About The Position

NVIDIA is a global leader in physical AI, powering self-driving cars, humanoid robots, intelligent environments, and medical devices. Our software platforms are central to this mission. We help innovators build products that save lives, enhance working conditions, and improve living standards globally!

We are hiring a Senior Systems Software Engineer to join our team as a technical expert focused on optimizing deep learning inference for autonomous vehicles and robotics on edge devices. This role calls for a hands-on specialist who can examine model architectures at the operator level, locate performance issues through kernel trace analysis, and evaluate modern architectures (transformers, vision-language models, diffusion/flow matching, state space models) on GPU and SoC targets. This work directly improves the ability of autonomous vehicles and robots to perceive and respond in real time.

The group tackles some of the hardest optimization challenges in the industry, positioned at the convergence of model frameworks, compiler technology, and embedded hardware. We collaborate closely with automotive OEMs, robotics partners, and internal hardware teams to extend edge device capabilities.

Requirements

  • Master’s degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.
  • Over 12 years working in the industry, including at least 8 years specializing in deep learning model optimization, inference engineering, or neural network compilation.
  • Ability to understand and review model architectures at the operator/kernel level, rather than treating models as black boxes.
  • Over 5 years of validated expertise in embedded/edge software, with experience delivering production inference solutions within power-limited, latency-sensitive deployment environments.
  • Comprehensive knowledge of contemporary DL architectures: transformers, attention variants, vision encoders (ViT), multi-modal/vision-language model frameworks, as well as experience with diffusion models and/or state space models.
  • Expert knowledge of GPU architecture fundamentals, CUDA, and low-level performance optimization using heterogeneous computing.
  • Experience with TensorRT, compiler IRs, or equivalent inference optimization toolchains.
  • Solid understanding of embedded operating system internals (QNX/Linux), memory management, C/C++, and embedded/system software concepts.
  • Background in parallel programming (e.g., CUDA, OpenMP) and experience reasoning about memory hierarchies, data movement, and compute utilization.
  • Demonstrated ability to collaborate directly with external partners and customers in a deeply technical role: triaging their workloads, identifying performance problems, and delivering solutions within production constraints.

Nice To Haves

  • Experience with ML compiler frameworks (TVM, MLIR, XLA, Triton) or contributing to inference runtime development.
  • Production deployment experience with autonomous vehicle perception or planning stacks, understanding the full pipeline from sensor input through trajectory output.
  • Familiarity with the Physical AI model landscape: VLM + action expert architectures, end-to-end driving models, or robot foundation models.
  • Contributions to MLPerf benchmarks and large-scale industry performance optimization efforts.
  • Experience with automotive safety standards (ISO 26262, SOTIF) and their implications for inference system development.

Responsibilities

  • Address customer and partner optimization challenges: Engage directly with prominent automotive OEMs and robotics partners to analyze, debug, and improve their deep learning models on NVIDIA platforms. We emphasize delivering solutions rather than just recommendations.
  • Own performance benchmarking: Drive efforts to achieve leading results on MLPerf Edge and industry benchmarks, as well as closed-source engagements with key partners. Define methodology, ensure reproducibility, and turn results into actionable optimization priorities.
  • Evaluate emerging model architectures: Investigate new DL architectures, including vision encoders, multi-modal VLMs, hybrid SSM-Transformer backbones, diffusion/flow matching decoders, and multi-camera tokenizers, assessing their compilation feasibility, memory footprint, and latency on target SoCs.
  • Collaborate across teams: Work alongside our compiler, runtime, and hardware groups to link model-level insight with platform capabilities. Contribute to build reviews and help develop internal roadmap priorities based on real customer workload patterns.
  • Represent NVIDIA externally: Share our deep learning optimization expertise at conferences, webinars, and partner events. Help elevate the broader team by bringing back insights and establishing guidelines.
  • Deliver TensorRT and compiler-stack solutions for edge: Build and deploy inference solutions on Jetson, DRIVE, and GPU + ARM platforms for AV and robotics workloads. Develop Proofs of Readiness (PORs) and collaborate closely with our compiler team on Torch-TRT, MLIR-TRT, and related frameworks to bridge performance gaps.

Benefits

  • You will be eligible for equity and benefits.