About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

What You'll Be Doing

The software architecture group at NVIDIA has openings for a Deep Learning Communication Architect. We scale DNN models and training/inference frameworks to systems with hundreds of thousands of nodes. The core duties of the role are listed under Responsibilities below.

Requirements

  • A Ph.D., Master's, or B.S. in Computer Science (CS), Electrical Engineering (EE), Computer Science and Electrical Engineering (CSEE), or a closely related field, or equivalent experience.
  • 6+ years of experience in building DNNs, scaling DNNs, parallelizing DNN frameworks, or working with deep learning training and inference workloads.
  • Experience evaluating, analyzing, and optimizing the training and inference performance of state-of-the-art LLMs on cutting-edge hardware.
  • Deep understanding of parallelism techniques, including Data Parallelism, Pipeline Parallelism, Tensor Parallelism, Expert Parallelism, and FSDP (a brief illustrative sketch follows this list).
  • Understanding of emerging serving architectures such as disaggregated serving, and of inference servers such as Dynamo and Triton.
  • Proficiency in developing code for one or more deep neural network (DNN) training and inference frameworks, such as PyTorch, TensorRT-LLM, vLLM, or SGLang.
  • Strong programming skills in C++ and Python.
  • Familiarity with GPU computing (CUDA, OpenCL) and with InfiniBand and RoCE networks.
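
For illustration only, the sketch below (referenced from the parallelism bullet above) wraps a toy model in PyTorch FSDP. It is a minimal example under assumed conditions (NCCL backend, torchrun launch, placeholder model and sizes), not a prescribed workflow for this role.

```python
# A minimal, assumed setup (NCCL backend, torchrun launch, toy model), shown
# only to illustrate the FSDP flavor of parallelism named in the requirements.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Placeholder model; real workloads would be transformer blocks with an
# auto-wrap policy so each block becomes its own FSDP shard unit.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 4096)
).cuda()

# FSDP shards parameters, gradients, and optimizer state across ranks and
# all-gathers parameters just in time for each forward/backward pass.
fsdp_model = FSDP(model)
opt = torch.optim.AdamW(fsdp_model.parameters(), lr=1e-4)

x = torch.randn(8, 4096, device="cuda")
fsdp_model(x).sum().backward()  # backward triggers reduce-scatter of gradients
opt.step()

dist.destroy_process_group()
```

Each of the techniques listed above (data, pipeline, tensor, and expert parallelism, and FSDP) trades memory footprint against the volume and pattern of inter-GPU communication, which is the axis this role is asked to understand and optimize.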

Nice To Haves

  • Prior contributions to one or more DNN training and inference frameworks.
  • Deep understanding of, and contributions to, the scaling of LLMs on large-scale systems.

Responsibilities

  • Optimizing communication performance: Identify and eliminate bottlenecks in data transfer and synchronization during distributed deep learning training and inference (a minimal timing sketch follows this list).
  • Designing efficient communication protocols: Develop and implement communication algorithms and protocols tailored to deep learning workloads, minimizing communication overhead and latency.
  • Hardware and software co-design: Collaborate with hardware and software teams to design systems that make effective use of high-speed interconnects (e.g., NVLink, InfiniBand, SPC-X) and communication libraries (e.g., MPI, NCCL, UCX, UCC, NVSHMEM).
  • Exploring new communication technologies: Research and evaluate new communication technologies and techniques to enhance the performance and scalability of deep learning systems.
  • Developing and implementing solutions: Build proofs of concept, conduct experiments, and perform quantitative modeling to validate and deploy new communication strategies.
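
As a deliberately simplified illustration of the collectives involved, the sketch below times a single NCCL all-reduce through torch.distributed. It is a minimal example, not NVIDIA-internal tooling; the torchrun launch, one-GPU-per-rank assumption, and payload size are all assumptions made for the sketch.

```python
# A minimal sketch, not NVIDIA-internal tooling: time one NCCL all-reduce
# through torch.distributed, assuming a torchrun launch with one GPU per rank.
import os
import time
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Gradient-sized payload: 128 Mi fp16 elements, i.e. 256 MiB per rank.
buf = torch.zeros(128 * 1024 * 1024, dtype=torch.float16, device="cuda")

# Warm up so NCCL can set up communicators and pick an algorithm.
for _ in range(5):
    dist.all_reduce(buf)
torch.cuda.synchronize()

start = time.perf_counter()
dist.all_reduce(buf)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

if dist.get_rank() == 0:
    gb = buf.numel() * buf.element_size() / 1e9
    print(f"all-reduce of {gb:.2f} GB took {elapsed * 1e3:.2f} ms")

dist.destroy_process_group()
```

Launched with something like `torchrun --nproc_per_node=8 allreduce_bench.py` (the filename is hypothetical), the measured time varies with message size, topology (NVLink vs. InfiniBand vs. Ethernet), and the algorithm NCCL selects, which is where the bottleneck analysis described above begins.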

Benefits

  • With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers.
  • You will also be eligible for equity and benefits.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000
