About The Position

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are seeking a skilled HPC/AI Benchmarking and Telemetry Engineer to drive performance insights across our most advanced computing infrastructure. In this role, you'll develop and execute detailed benchmarking methodologies for large-scale HPC and AI clusters, and you'll build telemetry frameworks that provide visibility into system performance from the host level through network and data center infrastructure. This is an outstanding opportunity to work at the intersection of cutting-edge hardware, software, and infrastructure, helping NVIDIA and our customers unlock the full potential of GPU-accelerated computing. You'll collaborate with engineering teams, customers, and partners to ensure our platforms deliver outstanding performance and reliability in real-world production environments.

Requirements

  • Bachelor's degree in Computer Science, Electrical Engineering, Computer Engineering, or a related field (or equivalent experience).
  • 8+ years of direct experience working with HPC and/or AI infrastructure, including cluster deployment, performance analysis, and benchmarking.
  • Deep expertise in Linux system administration, including kernel tuning, process scheduling, storage I/O optimization, and solving performance issues at scale.
  • Proven experience crafting and implementing telemetry and monitoring solutions for large-scale distributed systems, with proficiency in tools such as Prometheus, Grafana, DCGM, collectd, or similar observability platforms.
  • Solid grasp of GPU architectures, CUDA programming principles, and GPU performance traits in high-performance computing and artificial intelligence workloads.
  • Familiarity with job schedulers (Slurm, PBS, LSF) and container orchestration platforms (Kubernetes, Docker) in HPC/AI environments.
  • Proficiency in Python, Bash, and other scripting languages for automation, data analysis, and workflow orchestration.
  • Excellent analytical and problem-solving skills with the ability to interpret complex performance data and communicate findings to both technical and non-technical audiences.

Nice To Haves

  • Experience with high-performance networking technologies including InfiniBand, RoCE, and Ethernet fabric tuning and performance analysis.
  • Knowledge of parallel filesystems such as Lustre, GPFS, BeeGFS, Weka, or VAST, including performance tuning and benchmarking.
  • Background in power and thermal management for high-density compute environments, including PUE optimization and liquid cooling technologies.
  • Contributions to open-source benchmarking tools or performance analysis frameworks.
  • Industry certifications such as RHCE, CKA, or vendor-specific HPC/data center credentials.

Responsibilities

  • Formulate benchmarking methodologies for HPC and AI workloads, and execute them end to end on large-scale GPU clusters.
  • Analyze performance metrics to identify optimization opportunities and inform architectural improvements.
  • Develop and maintain telemetry infrastructure that captures performance data spanning host-level GPU/CPU metrics, network fabric utilization, and facility-level power/thermal characteristics.
  • Collaborate closely with hardware engineering, software development, and customer-facing teams to define performance requirements, fix bottlenecks, and validate configurations against real-world workloads.
  • Deploy and manage observability stacks including monitoring tools like Prometheus, visualization platforms such as Grafana, NVIDIA's DCGM, and custom telemetry solutions to provide actionable insights into cluster health, utilization, and performance trends.
  • Partner directly with engineering and internal teams to understand their performance requirements, conduct on-site benchmarking engagements, and deliver detailed analysis and recommendations for workload optimization.
  • Maintain deep expertise in industry-standard HPC and AI benchmarks such as HPL, HPCG, MLPerf, and NCCL tests.
  • Contribute to developing new benchmarking methodologies for emerging workloads.

Benefits

  • NVIDIA offers highly competitive salaries and a comprehensive benefits package.
  • As you plan your future, see what we can offer you and your family: www.nvidiabenefits.com/