NVIDIA · Posted 10 days ago
Full-time • Senior
US, CA
5,001-10,000 employees

NVIDIA DGX™ Cloud is an end-to-end, scalable AI platform for developers, offering capacity built on the latest NVIDIA architecture and co-engineered with the world’s leading cloud service providers (CSPs). We are seeking highly skilled parallel and distributed systems engineers to drive the performance analysis, optimization, and modeling that define the architecture and design of NVIDIA’s DGX Cloud clusters. The ideal candidate will have a deep understanding of how to conduct end-to-end performance analysis of critical AI applications running on large-scale parallel and distributed systems. Candidates will work closely with cross-functional teams to define DGX Cloud cluster architecture for different CSPs, optimize workloads running on these systems, and develop the methodology that drives the HW-SW co-design cycle, building world-class AI infrastructure at scale and making it more easily consumable by users (via improved scalability, reliability, cleaner abstractions, etc.).

  • Develop benchmarks and end-to-end customer applications running at scale, instrumented for performance measurement, tracking, and sampling, to measure and optimize the performance of important applications and services
  • Construct carefully designed experiments to analyze, study, and develop critical insights into performance bottlenecks and dependencies from an end-to-end perspective
  • Develop ideas for improving end-to-end system performance and usability by driving changes in the HW, the SW, or both
  • Collaborate with AI researchers, developers, and application service providers to understand internal developer and external customer pain points and requirements, project future needs, and share best practices
  • Develop the necessary modeling framework and total cost of ownership (TCO) analysis to enable efficient exploration and sweeps of the architecture and design space
  • Develop the methodology needed to drive the engineering analysis that informs the architecture, design, and roadmap of DGX Cloud
  • Expertise in working with large-scale parallel and distributed accelerator-based systems
  • Expertise in optimizing the performance of AI workloads on large-scale systems
  • Experience with performance modeling and benchmarking at scale
  • Strong background in computer architecture, networking, storage systems, and accelerators
  • Familiarity with popular AI frameworks (PyTorch, TensorFlow, JAX, Megatron-LM, TensorRT-LLM, vLLM, among others)
  • Experience with AI/ML models and workloads, in particular LLMs, as well as an understanding of DNNs and their use in emerging AI/ML applications and services
  • Bachelor’s/Master’s in Engineering or equivalent experience (preferably Electrical Engineering, Computer Engineering, or Computer Science)
  • 10 years of experience in the above areas
  • Proficiency in Python and C/C++
  • Expertise with at least one public CSP’s infrastructure (GCP, AWS, Azure, OCI, …)
  • PhD in the relevant areas
  • Very high intellectual curiosity
  • Confidence to dig in as needed
  • Not afraid of confronting complexity
  • Able to pick up new areas quickly
  • Proficiency in CUDA and XLA
  • Excellent interpersonal skills
  • Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
  • The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
  • You will also be eligible for equity and benefits.