About The Position

We are looking for a High-Performance LLM Training Engineer! NVIDIA is seeking experienced engineers specializing in performance analysis and optimization to improve the efficiency of LLM training workloads, which shape the world's most advanced computing systems. This position focuses on optimizing NVIDIA's LLM software stack in frameworks such as PyTorch and JAX for high-performance training on thousands of GPUs, while also helping shape the hardware roadmap for the next generation of GPUs powering the AI revolution.

Requirements

  • MS in Computer Science, Electrical Engineering or Computer Engineering (or equivalent experience).
  • Strong background in deep learning and neural networks, particularly in training.
  • Deep background in computer architecture and familiarity with the fundamentals of GPU architecture.
  • Proven experience analyzing and tuning application performance, as well as modeling processor- and system-level performance.
  • Programming skills in C++, Python, and CUDA.

Responsibilities

  • Understand, analyze, profile, and optimize AI training workloads on innovative hardware and software platforms.
  • Understand the big picture of training performance on GPUs, prioritizing and then solving problems across all state-of-the-art neural networks.
  • Implement production-quality software in multiple layers of NVIDIA's deep learning platform stack, from drivers to DL frameworks.
  • Build and support NVIDIA submissions to the MLPerf Training benchmark suite.
  • Implement key DL training workloads in NVIDIA's proprietary processor and system simulators to enable future architecture studies.
  • Build tools to automate workload analysis, workload optimization, and other critical workflows.

Benefits

  • You will be eligible for equity and benefits.