About The Position

NVIDIA is at the forefront of the AI revolution, specifically in the constantly evolving field of Embodied AI. We are seeking a high-caliber Deep Learning Engineer to bridge the gap between cutting-edge multimodal architectures and real-time robotic execution for autonomous vehicles. In this role, you will design and implement state-of-the-art algorithms to make LLMs and VLMs fast, lean, and reliable enough to power an end-to-end driving stack. You won't just be "running" models; you will be re-architecting them for the edge, ensuring that models capable of complex scene reasoning can operate within the strict latency and safety constraints of an AV compute platform.

Requirements

  • PhD with 4+ years, MS with 6+ years, or BS (or equivalent experience) with 8+ years of relevant experience in Computer Science, Computer Engineering, or a related technical field.
  • Expert-level proficiency in PyTorch, JAX, or similar machine learning frameworks.
  • Strong proficiency with modern LLM/VLM inference stacks such as vLLM, TensorRT-LLM, and SGLang.
  • A proven track record of training, deploying, or optimizing large-scale DL models in production environments.
  • Deep familiarity with NVIDIA’s deep learning SDKs, specifically TensorRT and CUDA.
  • Strong understanding of GPU architecture, the compilation stack, and the ability to debug end-to-end performance across the hardware/software boundary.

Nice To Haves

  • Deep experience with LLM, VLM, and VLA model optimization, specifically tailored for real-time robotic control, embodied AI, and autonomous decision-making.
  • Proven track record of implementing low-bit inference.
  • Prior experience writing custom high-performance kernels using CUDA, Triton, or CUTLASS to accelerate non-standard neural network layers and specialized attention mechanisms.
  • Active contributions to open-source inference and optimization libraries such as vLLM, SGLang, and TensorRT-LLM.
  • Thorough understanding of the unique constraints of real-time robotics, including safety-critical determinism, hardware-in-the-loop (HIL) testing, and ultra-low latency requirements.

Responsibilities

  • Develop state-of-the-art model optimization techniques, such as speculative decoding with block diffusion, KV cache streaming, and prefill–decode separation, to boost end-to-end model performance for production deployments.
  • Implement advanced compression techniques including Quantization (FP4/FP8), pruning, and knowledge distillation to minimize model footprints without compromising safety-critical accuracy.
  • Design high-performance optimization strategies for inference, including automated model sharding (tensor/sequence parallelism) and the development of efficient attention kernels optimized for KV-caching.
  • Conduct deep, layer-by-layer model profiling to identify compute and memory bottlenecks, driving targeted optimizations for real-time execution.
  • Leverage the PyTorch ecosystem to extract standardized model graph representations and automate deployment pipelines for TensorRT conversion.
  • Scale DL model performance across diverse NVIDIA edge architectures, maximizing the throughput of specialized accelerators on the road.
  • Architect the software interfaces for integrating large-scale models into a high-performance C++ production environment.
  • Partner with research, TensorRT, and Cosmos teams to translate breakthrough innovations into shipping product solutions.
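
To give a flavor of the compression work described above, here is a minimal, hypothetical sketch of symmetric per-tensor "fake" quantization in NumPy. It is an illustration of the accuracy/footprint trade-off behind low-bit inference, not NVIDIA's actual FP4/FP8 pipeline (production stacks such as TensorRT use calibrated, hardware-specific formats); the function name and bit-widths here are chosen purely for the example.

```python
import numpy as np

def quantize_dequantize(x: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Symmetric per-tensor fake quantization: round floats onto an
    n_bits signed-integer grid and map back, simulating the error a
    low-bit inference path would introduce."""
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.abs(x).max() / qmax        # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return (q * scale).astype(x.dtype)

# Illustrative weight tensor (seeded for reproducibility).
weights = np.random.default_rng(0).normal(size=1024).astype(np.float32)
w8 = quantize_dequantize(weights, n_bits=8)
w4 = quantize_dequantize(weights, n_bits=4)

err8 = np.abs(weights - w8).mean()
err4 = np.abs(weights - w4).mean()
# Lower bit-widths shrink the model footprint but coarsen the grid,
# so the 4-bit reconstruction error exceeds the 8-bit one.
```

In practice, per-channel scales, calibration data, and quantization-aware fine-tuning are used to keep this error within safety-critical accuracy budgets.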


What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees
