Software Engineer – AI Inference Engine

FriendliAI
San Francisco, CA

About The Position

We are seeking a highly technical Inference Engine Engineer to optimize the performance and efficiency of our core inference engine. In this role, you will focus on designing, implementing, and optimizing GPU kernels and supporting infrastructure for next-generation generative and agentic AI workloads. Your work will directly power the most latency-critical and compute-intensive systems deployed by our customers. We are looking for an exceptional engineer with a strong foundation in GPU programming and compiler infrastructure. The ideal candidate enjoys pushing performance boundaries and has experience supporting production-scale machine learning applications.

Requirements

  • 5+ years of experience in production or high-impact research environments
  • Production-level expertise in Python and C++
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent
  • Experience developing machine learning frameworks or performance-critical runtime systems
  • Hands-on experience writing, optimizing, and profiling GPU kernels
  • Experience working with generative AI models such as transformer and diffusion models

Nice To Haves

  • Experience developing machine learning compilers or code generation systems
  • Familiarity with dynamic shape compilation, memory planning, and kernel fusion
  • Contributions to inference engines, compilers, or high-performance numerical libraries
  • Understanding of multi-GPU and distributed inference strategies

Responsibilities

  • Design and optimize custom GPU kernels for AI (e.g., transformer and diffusion) workloads
  • Contribute to the development of FriendliAI’s kernel compiler, memory planner, runtime, and other core components
  • Collaborate with cloud and infrastructure engineers to ensure end-to-end inference performance
  • Analyze performance bottlenecks across the software and hardware stack, and implement targeted optimizations
  • Drive support for new model architectures and tensor compute patterns
  • Maintain production-grade performance infrastructure, including profiling, benchmarking, and validation tools

Benefits

  • Flexible working hours
  • Daily lunch and dinner provided; unlimited snacks and beverages
  • Supportive and highly collaborative work environment
  • Health check-up support and top-tier equipment/hardware support
  • A front-row seat to the generative AI infrastructure revolution
  • Competitive compensation, startup equity, health insurance, and other benefits