We are now looking for a Senior Deep Learning Architect, LLM Inference! NVIDIA is at the forefront of the generative AI revolution. The Inference Benchmarking (IB) team focuses specifically on inference server performance optimization for Large Language Models (LLMs). If you're passionate about pushing the boundaries of GPU hardware and software performance and understand terms like disaggregated serving, data-parallel attention, MoE, Qwen3.5, DeepSeek, and GPT-OSS, then this is a great role for you!

What you'll be doing:

- Perform workload characterization of the latest LLMs and inference servers such as vLLM, SGLang, and TRT-LLM to ensure NVIDIA maintains its leadership position.
- Join forces with the performance marketing team to build engaging content, including blog posts and updates to InferenceX, to highlight NVIDIA's outstanding inference achievements.
- Collaborate with engineers from AI startup companies to establish standard benchmarking methodologies.
- Develop a constantly evolving website of inference performance results.
- Invent end-to-end (E2E) profiling and analysis tools that you will use to keep up with the rapid pace of generative AI.
- Contribute to deep learning software projects, such as PyTorch, TRT-LLM, vLLM, and SGLang, to drive advancements in the field.
- Verify that new GPU product launches deliver industry-leading performance.
- Collaborate across the company to guide the direction of inference serving, working with software, research, and product teams to ensure best-in-class performance.
- Use the latest coding agents and inference technology to improve team efficiency.
Job Type
Full-time
Career Level
Senior