About The Position

NVIDIA is leading the way in groundbreaking developments in Artificial Intelligence, High Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science-fiction inventions, from artificial intelligence to autonomous cars.

We are looking for a motivated Deep Learning engineer to bring advanced communication technologies into AI stacks, including PyTorch, TRT-LLM, vLLM, SGLang, and JAX. You will work with the team that created communication libraries such as NCCL and NVSHMEM and technologies such as GPUDirect for scaling Deep Learning and HPC applications. Your customers will have diverse multi-GPU demands, ranging from training at scales of up to 100K GPUs to inference at microsecond latencies. Communication performance between GPUs has a direct impact on AI applications, and your work in AI toolkits will make all of this easier for the community. This is an outstanding opportunity for someone with an AI background to advance the state of the art in this space. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?

Requirements

  • B.S., M.S., or Ph.D. in Computer Science or a related field (or equivalent experience) with 5+ years of software engineering and HPC/AI experience
  • Development or integration experience with Deep Learning frameworks such as PyTorch and JAX, and inference engines such as TRT-LLM, vLLM, and SGLang
  • Rapid prototyping and development with Python, C++, CUDA, or related DSLs (Triton, cuTe)
  • Solid grasp of AI models, parallelisms, and/or compiler technologies (e.g., torch.compile)
  • Experience conducting performance benchmarking on AI clusters
  • Familiarity with at least one performance profiler toolchain (PyTorch Profiler, NVIDIA Nsight Systems)
  • Understanding of HPC/AI communication concepts (one-sided vs. two-sided communication, elasticity, resiliency, topology discovery, etc.)
  • Adaptability and passion for learning new areas and tools
  • Flexibility to work and communicate effectively across different teams and time zones

Nice To Haves

  • Experience with parallel programming on at least one communication runtime (NCCL, NVSHMEM, MPI)
  • Good understanding of computer system architecture, HW-SW interactions, and operating systems principles (i.e., systems software fundamentals)
  • Expertise in one or more of these areas: training, distributed inference, MoE, reinforcement learning, kernel authoring (CUDA, Triton, cuTe, etc.)
  • Experience with programming for compute-communication overlap in distributed runtimes
  • Experience with AI compiler pattern matching and lowering
  • Solid understanding of memory hierarchy, consistency models, and tensor layouts

Responsibilities

  • Integrate new communication library features into AI frameworks: from proof of concept to performance analysis to production.
  • Perform deep analysis of AI workloads and frameworks to identify multi-GPU communication requirements and opportunities.
  • Collaborate hands-on with teams working on the latest AI models.
  • Improve AI compilers to hide communication or perform automatic fusion.
  • Conduct in-depth AI workload performance characterization on multi-GPU clusters.
  • Design fault-tolerant and elastic solutions for large-scale or dynamic AI workloads.
  • Author custom communication or fused compute-communication kernels to showcase ultimate performance on NVIDIA platforms.
  • Influence the roadmap of communication libraries such as NCCL and NVSHMEM.
  • Collaborate with a very dynamic team across multiple time zones.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees
