About The Position

We are now looking for a Senior Deep Learning Performance Architect! NVIDIA is seeking outstanding Performance Architects to help analyze and develop the next generation of architectures that accelerate AI and high-performance computing applications. Intelligent machines powered by Artificial Intelligence, computers that can learn, reason, and interact with people, are no longer science fiction. GPU Deep Learning has provided the foundation for machines to learn, perceive, reason, and solve problems. NVIDIA's GPUs run AI algorithms that simulate human intelligence, acting as the brains of computers, robots, and self-driving cars that can perceive and understand the world. Come join our Deep Learning Architecture team, where you can help build real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field!

Requirements

  • MS or PhD in Computer Science, Computer Engineering, Electrical Engineering, or equivalent experience.
  • 6+ years of relevant industry or research work experience.
  • Strong background in analytical and probabilistic modeling.
  • 2+ years of experience in parallel computing architectures, distributed systems, or interconnect fabrics.
  • A strong understanding of scheduling distributed deep learning workloads in large-scale systems.
  • Proficiency in Python for building performance and reliability models.

Nice To Haves

  • Direct experience managing or troubleshooting large-scale jobs—you understand how jobs actually fail and recover in production.
  • Experience working with large-scale operational datasets (e.g., scheduler or hardware telemetry).
  • Knowledge of how orchestrators (e.g., Slurm, Kubernetes, PyTorch) manage workload recovery and job scheduling under failures.
  • Ability to simplify and communicate rich technical concepts to a non-technical audience.

Responsibilities

  • Develop innovative HW architectures to extend the state of the art in parallel computing performance, energy efficiency and programmability.
  • Build the mathematical frameworks required to reason about system availability and workload goodput at massive scales.
  • Reason about overall Deep Learning workload performance under various scheduling, parallelization, and resiliency strategies.
  • Conduct "what-if" studies on hardware configurations, infrastructure knobs, and workload strategies to identify optimal system-level trade-offs.
  • Work closely with wider architecture and product teams to guide the hardware/software roadmap using data-driven performance and reliability projections.
  • Build and refine high-level simulators in Python to model the interaction between the knobs that impact performance and resiliency.

Benefits

  • You will be eligible for equity and benefits.