About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing—an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

The AI Networking Codesign and Benchmarking R&D group is looking for a senior software engineer. In this exciting role, you will profile, analyze, and optimize AI workloads on large-scale GPU and CPU clusters used for distributed deep learning LLM training and inference. Your primary focus will be collective communication and networking. You will work across hardware components such as HCAs, switches, CPUs, GPUs, and systems, and engage with software layers including LLM applications, machine learning frameworks, and communication and computing libraries. You will also build performance analysis tools and strategies to investigate details and clarify performance expectations, limitations, and bottlenecks. This is your chance to contribute to AI innovation!

Requirements

  • B.Sc in Computer Science or Software Engineering or equivalent experience.
  • 3+ years of experience with high-performance networking (RDMA, MPI, NCCL, SHARP).
  • Demonstrated ability in performance evaluation techniques and approaches.
  • Experience with NVIDIA GPUs and the CUDA toolkit.
  • Knowledge of deep learning frameworks like TensorFlow or PyTorch.
  • Expertise in networking collective communication libraries such as NCCL and protocols like RoCE and RDMA.
  • Fast, independent learner with strong analytical and problem-solving skills.
  • Proficiency in programming languages: Python, Bash, and C++.
  • Experience with a container-based development environment.
  • Great teammate who communicates clearly and works well with others.

Nice To Haves

  • Extensive understanding and hands-on experience with AI workloads and benchmarking for distributed LLM training.
  • Knowledge of the PyTorch, CUDA, and NCCL libraries.
  • Comprehensive system knowledge and understanding (Intel / AMD / ARM CPUs, NVIDIA GPUs, HCA, Memory, PCI).
  • Strong performance-evaluation skills using contemporary methods and tools.

Responsibilities

  • Characterizing AI workloads and deep learning models aimed at large-scale LLM training and inference on NVIDIA supercomputers. The role centers on distributed systems with a focus on high-performance networking and NVIDIA communication libraries.
  • Benchmarking, profiling, and analyzing performance to find bottlenecks and identify areas for improvement and optimization, with a strong emphasis on networking.
  • Developing a PyTorch trace-based profiling, analysis, and replay toolset to aid in benchmarking, debugging, and co-designing network systems for LLM workloads.
  • Collaborating with multiple teams, from hardware to software, to provide performance analysis insights.
  • Defining performance test plans, setting performance expectations for new technologies and solutions, and working to achieve performance targets.