At NVIDIA, we are at the forefront of the constantly evolving field of large language models and their application in agentic and reasoning use cases. As the scale and complexity of these LLM systems continue to increase, we are seeking outstanding engineers to join our team and help shape the future of LLM inference. Our team is dedicated to pushing the boundaries of what's possible with LLMs by improving the algorithmic performance and efficiency of the systems that serve them. We constantly reflect on how to improve these systems: developing new inference algorithms and protocols, improving existing models, and seamlessly integrating improvements so that NVIDIA's solutions can efficiently handle large-scale, sophisticated tasks.

What you'll be doing:

- Research and Development: Explore and incorporate contemporary research on generative AI, agents, and inference systems into the NVIDIA LLM software stack.
- Workload Analysis and Optimization: Conduct in-depth analysis, profiling, and optimization of agentic LLM workloads to significantly reduce request latency and increase request throughput while maintaining workflow fidelity.
- System Design and Implementation: Design and implement scalable systems that accelerate agentic workflows and efficiently handle sophisticated datacenter-scale use cases.
- Collaboration and Communication: Advise future iterations of NVIDIA software, hardware, and systems by engaging with a diverse set of teams at NVIDIA and with external partners, and by formalizing the strategic requirements presented by their workloads.
Job Type: Full-time
Career Level: Senior
Education Level: Ph.D. or professional degree
Number of Employees: 5,001-10,000 employees