AI and ML HPC Cluster Engineer

NVIDIA
Austin, TX

About The Position

NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. Our technology powers everything from generative AI to autonomous systems, and we continue to shape the future of computing through innovation and collaboration. Within this mission, our Managed AI Superclusters (MARS) team builds and scales the infrastructure, platforms, and tools that enable researchers and engineers to develop the next generation of AI/ML systems. By joining us, you’ll help design solutions that power some of the world’s most advanced computing workloads.

NVIDIA is looking for an AI/ML HPC Cluster Engineer to join our MARS team. You will provide technical expertise and problem solving in the management of large-scale HPC systems, including the deployment of compute, networking, and storage. You will work with a team of passionate and skilled engineers across NVIDIA who are continuously improving the tools used to build and manage this infrastructure. The ideal candidate is strong in Linux administration, networking, storage, and job schedulers, drives improvements, and understands researcher computing needs.

Requirements

  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
  • Minimum 2 years of experience administering multi-node compute infrastructure
  • Background in managing AI/HPC job schedulers such as Slurm, Kubernetes, PBS, RTDA, BCM (formerly Bright Cluster Manager), or LSF
  • Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions
  • Proven understanding of cluster configuration management tools (Ansible, Puppet, Salt, etc.), container technologies (Docker, Singularity, Podman, Shifter, Charliecloud), Python programming, and Bash scripting
  • Passion for continual learning and staying ahead of emerging technologies and effective approaches in HPC and AI/ML infrastructure

Nice To Haves

  • Background with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking
  • Experience with AI/ML concepts, algorithms, models, and frameworks (PyTorch, TensorFlow)
  • Experience with InfiniBand, including IPoIB and RDMA
  • Understanding of fast, distributed storage systems such as Lustre and GPFS for AI/HPC workloads
  • Applied knowledge in AI/HPC workflows that involve MPI

Responsibilities

  • Support day-to-day operations of production on-premises and multi-cloud AI/HPC clusters, ensuring system health, user satisfaction, and efficient resource utilization.
  • Directly administer internal research clusters, including upgrades, incident response, and reliability improvements.
  • Develop and improve our ecosystem around GPU-accelerated computing, including building scalable automation solutions.
  • Maintain heterogeneous AI/ML clusters on-premises and in the cloud.
  • Support our researchers in running their workloads, including performance analysis and optimization.
  • Analyze and improve cluster efficiency, reducing job fragmentation and GPU waste to meet internal SLA targets.
  • Support root cause analysis and suggest corrective action.
  • Proactively identify and fix issues before they impact users.
  • Triage and support postmortems for reliability incidents affecting users or infrastructure.
  • Participate in a shared on-call rotation supported by strong automation, clear paths for responding to critical issues, and well-defined incident workflows.

Benefits

  • You will be eligible for equity and benefits.