NVIDIA · Posted 3 months ago
$184,000 - $356,500/Yr
Senior
Santa Clara, CA
5,001-10,000 employees
Computer and Electronic Product Manufacturing

We are now looking for a Senior Deep Learning Architect for LLM Inference! NVIDIA is at the forefront of the generative AI revolution. Our Inference Benchmarking (IB) team is specifically focused on advanced inference server performance for Large Language Models (LLMs). If you're passionate about pushing the boundaries of GPU hardware and software performance and understand terms like pre-fill phase, generation phase, paged attention, MoE, Tensor Parallel, Llama, Mixtral, and HuggingFace, then this could be a great role for you!

What you'll be doing:

  • Characterizing the latest LLMs and inference servers, such as vLLM and SGLang, to ensure that TRT-LLM maintains its leadership position.
  • Joining forces with the performance marketing team to build engaging content, including blog posts and other written materials, that highlights TRT-LLM's outstanding achievements.
  • Collaborating with engineers from AI startup companies to debug and establish standard methodologies.
  • Profiling GPU kernel-level performance to identify hardware and software optimization opportunities.
  • Developing profiling and analysis software tools that can keep up with the rapid pace of network scaling.
  • Contributing to deep learning software projects such as PyTorch, TRT-LLM, vLLM, and SGLang to drive advancements in the field.
  • Verifying that TRT-LLM's performance meets expectations for new GPU product launches.
  • Collaborating across the company to guide the direction of inference serving, working with software, research, and product teams to ensure world-class performance.
What we need to see:

  • Master's or PhD degree in Computer Science, Computer Engineering, or related fields, or equivalent experience.
  • 6+ years of relevant industry experience.
  • Detailed knowledge of deep learning inference serving, PyTorch programming, profiling, and compiler optimizations.
  • Proficiency in Python and C++ programming languages and familiarity with CUDA.
  • Experience with LLMs and their performance challenges and opportunities.
  • Solid understanding of CPU and GPU microarchitecture and performance characteristics.
  • Experience with complex software projects like frameworks, compilers, or operating systems.
  • Good written and verbal communication skills and the ability to work independently and collaboratively in a fast-paced environment.
  • A drive to continuously improve software and hardware performance.
Ways to stand out from the crowd:

  • Examples of novel use cases for agentic AI tools in the workplace.
  • Experience with database and visualization tools such as D3.js.
You will also be eligible for equity and benefits.