About The Position

NVIDIA is seeking an exceptional Manager, Deep Learning Inference Software, to lead a world-class engineering team advancing the state of AI model deployment. You will shape the software powering today’s most sophisticated AI systems — from large language models to multimodal generative AI — all accelerated on NVIDIA GPUs. The Deep Learning Inference team develops and optimizes open-source frameworks that make AI deployment scalable, efficient, and accessible — including SGLang, vLLM, and FlashInfer. Our work enables developers worldwide to harness NVIDIA accelerators for real-time inference at every scale, from datacenter clusters to edge devices. If you’re a passionate technical leader ready to shape the future of AI inference frameworks — and build the software that powers the world’s most advanced models — we’d love to hear from you.

Requirements

  • MS, PhD, or equivalent experience in Computer Science, Electrical/Computer Engineering, or a related field.
  • 6+ years of software development experience, including 3+ years in technical leadership or engineering management.
  • Strong background in C/C++ software design and development; proficiency in Python is a plus.
  • Hands-on experience with GPU programming (CUDA, Triton, CUTLASS) and performance optimization.
  • Proven record of deploying or optimizing deep learning models in production environments.
  • Experience leading teams using Agile or collaborative software development practices.

Nice To Haves

  • Significant open-source contributions to deep learning or inference frameworks such as PyTorch, vLLM, SGLang, Triton, or TensorRT-LLM.
  • Deep understanding of multi-GPU communications (NIXL, NCCL, NVSHMEM) and distributed inference architectures.
  • Expertise in performance modeling, profiling, and system-level optimization across CPU and GPU platforms.
  • Proven ability to mentor engineers, guide architectural decisions, and deliver complex projects with measurable impact.
  • Publications, patents, or talks on LLM serving, model optimization, or GPU performance engineering.

Responsibilities

  • Lead, mentor, and scale a high-performing engineering team focused on deep learning inference and GPU-accelerated software.
  • Drive the strategy, roadmap, and execution of NVIDIA’s inference framework engineering, with a focus on SGLang.
  • Partner with internal compiler, libraries, and research teams to deliver end-to-end optimized inference pipelines across NVIDIA accelerators.
  • Oversee performance tuning, profiling, and optimization of large-scale models for LLM, multimodal, and generative AI applications.
  • Guide engineers in adopting best practices for CUDA, Triton, CUTLASS, and multi-GPU communications (NIXL, NCCL, NVSHMEM).
  • Represent the team in roadmap and planning discussions, ensuring alignment with NVIDIA’s broader AI and software strategies.
  • Foster a culture of technical excellence, open collaboration, and continuous innovation.

Benefits

  • Highly competitive salaries
  • Comprehensive benefits package
  • Equity
  • Opportunities for career advancement