We are now looking for a Senior Deep Learning Architect for LLM Inference! NVIDIA is at the forefront of the generative AI revolution. Our Inference Benchmarking (IB) team focuses on advanced inference server performance for Large Language Models (LLMs). If you're passionate about pushing the boundaries of GPU hardware and software performance and understand terms like prefill phase, generation phase, paged attention, MoE, Tensor Parallel, Llama, Mixtral, and HuggingFace, this could be a great role for you!

What you'll be doing:
- Characterize the latest LLMs and inference servers such as vLLM and SGLang to ensure that TRT-LLM maintains its leadership position.
- Join forces with the performance marketing team to build engaging content, including blog posts and other written materials, that highlights TRT-LLM's achievements.
- Collaborate with engineers from AI startups to debug issues and establish standard methodologies.
- Profile GPU kernel-level performance to identify hardware and software optimization opportunities.
- Develop profiling and analysis tools that keep pace with rapid network scaling.
- Contribute to deep learning software projects such as PyTorch, TRT-LLM, vLLM, and SGLang to drive advancements in the field.
- Verify that TRT-LLM's performance meets expectations for new GPU product launches.
- Collaborate across the company, working with software, research, and product teams, to guide the direction of inference serving and ensure world-class performance.
Job Type: Full-time
Career Level: Senior
Number of Employees: 5,001-10,000