About The Position

NVIDIA is at the forefront of the generative AI revolution. The Inference Benchmarking (IB) team focuses specifically on inference server performance optimization for Large Language Models (LLMs). This role is for individuals who are passionate about pushing the boundaries of GPU hardware and software performance and who are fluent in terms like disaggregated serving, data-parallel attention, MoE, Qwen3.5, DeepSeek, and GPT-OSS.

Requirements

  • Master's or PhD degree in Computer Science, Computer Engineering, related fields, or equivalent experience.
  • Relevant software development experience.
  • Detailed knowledge of deep learning inference serving, PyTorch programming, profiling, and compiler optimizations.
  • Experience developing client-server LLM applications with the OpenAI API or MCP and identifying performance bottlenecks.
  • Solid understanding of CPU and GPU microarchitecture and performance characteristics.
  • Experience with complex software projects like frameworks, compilers, or operating systems.
  • Demonstrated proficiency with the latest AI coding agents, such as Claude Code, Codex, and Cursor.
  • Excellent written and verbal communication skills and the ability to work independently and collaboratively in a fast-paced environment.
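To make the client-server requirement above concrete, here is a minimal sketch of the OpenAI-compatible chat-completions request shape and the two latency metrics most often used to spot serving bottlenecks: time-to-first-token (TTFT) and inter-token latency (ITL). Function names, the model placeholder, and the metric helpers are illustrative assumptions, not part of any specific NVIDIA codebase.

```python
def build_chat_request(model, prompt, max_tokens=128):
    """Build an OpenAI-compatible /v1/chat/completions payload.

    Any server speaking the OpenAI API (e.g. vLLM, SGLang, TRT-LLM)
    accepts this shape; the model name here is a placeholder.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,  # streaming is what exposes time-to-first-token
    }

def streaming_metrics(token_timestamps, request_start):
    """Derive TTFT and mean inter-token latency from recorded timestamps.

    token_timestamps: wall-clock arrival time of each streamed token chunk.
    request_start: wall-clock time the request was sent.
    """
    ttft = token_timestamps[0] - request_start
    gaps = [b - a for a, b in zip(token_timestamps, token_timestamps[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return {"ttft_s": ttft, "itl_s": itl}
```

In practice the timestamps would be recorded as each streamed chunk arrives from the server; the helper then separates prefill-dominated cost (TTFT) from decode-dominated cost (ITL).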

Nice To Haves

  • A drive to continuously improve software and hardware performance.
  • Examples of novel use cases for agentic AI tools in the workplace.
  • Experience with databases and visualization tools.

Responsibilities

  • Characterize workloads of the latest LLMs and inference servers, such as vLLM, SGLang, and TRT-LLM, to ensure NVIDIA maintains its leadership position.
  • Join forces with the performance marketing team to build engaging content, including blog posts and updates to InferenceX, to highlight NVIDIA's outstanding inference achievements.
  • Collaborate with engineers from AI startup companies to establish standard benchmarking methodologies.
  • Develop a constantly evolving website of inference performance results.
  • Invent E2E profiling and analysis tools that you will use to keep up with the rapid pace of Generative AI.
  • Contribute to deep learning software projects, such as PyTorch, TRT-LLM, vLLM, and SGLang, to drive advancements in the field.
  • Verify that new GPU product launches deliver industry-leading performance.
  • Collaborate across the company to guide the direction of inference serving, working with software, research, and product teams to ensure best-in-class performance.
  • Use the latest coding agents and inference technology to improve team efficiency.
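The workload-characterization responsibilities above ultimately reduce to summarizing benchmark runs. Below is a minimal sketch (the function name and the p50/p95/p99 percentile choices are assumptions mirroring common serving dashboards, not NVIDIA tooling) of turning per-request latencies into the usual throughput and tail-latency summary:

```python
import statistics

def characterize(latencies_s, total_output_tokens, wall_clock_s):
    """Summarize one benchmark run: throughput plus tail latency.

    latencies_s: per-request end-to-end latencies in seconds.
    total_output_tokens: tokens generated across all requests.
    wall_clock_s: total duration of the run.
    """
    # quantiles(n=100) returns the 99 percentile cut points p1..p99
    qs = statistics.quantiles(latencies_s, n=100)
    return {
        "throughput_tok_per_s": total_output_tokens / wall_clock_s,
        "p50_s": statistics.median(latencies_s),
        "p95_s": qs[94],
        "p99_s": qs[98],
    }
```

Reporting tail percentiles alongside raw throughput matters because serving systems can trade p99 latency for tokens/sec (e.g. via larger batch sizes), and a single mean hides that trade-off.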

Benefits

  • Equity
  • Benefits


What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Number of Employees: 5,001-10,000 employees
