NVIDIA DGX™ Cloud is an end-to-end AI platform for developers, offering scalable capacity built on the latest NVIDIA architecture and co-engineered with the world's leading cloud service providers (CSPs). We are seeking highly skilled Parallel and Distributed Systems engineers to drive the performance analysis, optimization, and modeling that define the architecture and design of NVIDIA's DGX Cloud clusters. The ideal candidate will have a deep understanding of the methodology for conducting end-to-end performance analysis of critical AI applications running on large-scale parallel and distributed systems. Candidates will work closely with cross-functional teams to define DGX Cloud cluster architecture for different CSPs, optimize workloads running on these systems, and develop the methodology that drives the HW-SW co-design cycle, building world-class AI infrastructure at scale and making it more easily consumable by users (via improved scalability, reliability, cleaner abstractions, etc.).

What you will be doing:

- Develop benchmarks and end-to-end customer applications running at scale, instrumented for performance measurement, tracking, and sampling, to measure and optimize the performance of important applications and services.
- Construct carefully designed experiments to analyze, study, and develop critical insights into performance bottlenecks and dependencies from an end-to-end perspective.
- Develop ideas for improving end-to-end system performance and usability by driving changes in the HW or SW (or both).
- Collaborate with AI researchers, developers, and application service providers to understand internal developer and external customer pain points and requirements, project future needs, and share best practices.
- Develop the modeling framework and TCO (total cost of ownership) analysis needed to enable efficient exploration and sweeps of the architecture and design space.
- Develop the methodology needed to drive the engineering analysis that informs the architecture, design, and roadmap of DGX Cloud.