Member of Technical Staff, Kernel Engineering

Inferact
San Francisco, CA
Hybrid

About The Position

Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware, a position that took years to build.

We're looking for a performance engineer to squeeze every FLOP out of modern accelerators. You'll write the kernels and low-level optimizations that make vLLM the fastest inference engine in the world. Your code will run on hundreds of accelerator types, from NVIDIA GPUs to emerging silicon. When hardware vendors develop new chips, they integrate with vLLM. You'll work directly with these teams to ensure we're extracting maximum performance from every generation of hardware.

Requirements

  • Bachelor's degree or equivalent experience in computer science, engineering, or a related field.
  • Deep experience writing CUDA kernels or equivalent (CuTeDSL, Triton, TileLang, Pallas).
  • Strong understanding of GPU architecture: memory hierarchy, warp scheduling, tiling, tensor cores.
  • Proficiency in C++ and Python with demonstrated ability to write high-performance code.
  • Experience with profiling tools (Nsight, rocprof) and performance optimization methodologies.
  • Obsession with benchmarks and squeezing every percentage point of speedup.

Nice To Haves

  • Experience with ML-specific kernel optimization (FlashAttention, fused kernels).
  • Knowledge of quantization techniques (INT8, FP8, mixed-precision).
  • Familiarity with multiple accelerator platforms (NVIDIA, AMD, TPU, Intel).
  • Experience with compiler technologies (LLVM, MLIR, XLA).
  • Kernel-related contributions to vLLM or other inference engine projects.
  • Contributions to open-source GPU, ML systems, or compiler optimization projects.
  • Deep technical blog posts written on GPU optimization.

Responsibilities

  • Write kernels and low-level optimizations to make vLLM the fastest inference engine.
  • Work directly with hardware vendor teams to extract maximum performance from every generation of hardware.
  • Optimize code to run on hundreds of accelerator types, from NVIDIA GPUs to emerging silicon.

Benefits

  • Generous health, dental, and vision benefits
  • 401(k) company match