About The Position

NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. Our technology powers everything from generative AI to autonomous systems, and we continue to shape the future of computing through innovation and collaboration.

Within this mission, our team, Managed AI Research Superclusters (MARS), builds and scales the infrastructure, platforms, and tools that enable researchers and engineers to develop the next generation of AI/ML systems. By joining us, you’ll help design solutions that power some of the world’s most advanced computing workloads.

As a member of the Scheduling team, you will participate in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high-performance computing, and computationally intensive workloads. We seek engineers with deep technical expertise to identify architectural directions and new approaches to AI workload scheduling that serve many simultaneous, large multi-node GPU workloads with complex requirements and dependencies. This role offers an excellent opportunity to deliver production-grade solutions, get hands-on with groundbreaking technology, and work closely with technical leaders solving some of the biggest challenges in machine learning, cloud computing, and system co-design.

Requirements

  • Bachelor’s degree in Computer Science, Electrical Engineering or related field or equivalent experience
  • 5+ years of work experience
  • Strong understanding of batch scheduling, preferably with experience in schedulers such as SLURM or K8s batch schedulers (Kueue, Volcano, etc.)
  • Significant experience with systems programming languages such as C/C++ and Go, as well as scripting languages such as Python and Bash
  • Established experience with the Linux operating system, environment, and tools
  • Experience analyzing and tuning performance for a variety of AI workloads
  • In-depth understanding of container technologies such as Docker, Singularity, and Podman
  • Flexibility and adaptability to work in a dynamic environment with different frameworks and requirements
  • Excellent communication, interpersonal, and customer collaboration skills

Nice To Haves

  • Knowledge of high-performance computing
  • Open source software contributions
  • Experience with deep learning frameworks such as PyTorch and TensorFlow
  • Passion for software development processes

Responsibilities

  • Design and develop new scheduling features and add-on services to improve GPU compute clusters across many dimensions, such as resource usage fairness, GPU occupancy, GPU waste, application resilience, application performance, and power usage
  • Design and develop batch workload management and orchestration services
  • Provide support to staff and end users to resolve batch scheduler issues
  • Build and improve our ecosystem around GPU-accelerated computing
  • Analyze and optimize the performance of deep learning workflows
  • Develop large-scale automation solutions
  • Perform root cause analysis and suggest corrective actions for problems at both large and small scales
  • Find and fix problems before they occur

Benefits

  • You will be eligible for equity and benefits.