At NVIDIA, data centers are the engine behind AI. Join us to develop, launch, and operate open-source data centers. NVIDIA is building the blueprint for AI factories worldwide. We are seeking an SRE to architect and automate the lifecycle of next-generation, open-source-driven data centers at gigawatt scale. This role moves beyond infrastructure maintenance to define the industry's reference operational model for GPU-accelerated computing. Success involves bridging the gap between hardware bring-up and large-scale distributed software to ensure the foundation of global AI remains resilient, scalable, and open. For an engineer driven to tackle the unique telemetry, orchestration, and reliability challenges found only at gigawatt scale, this is an opportunity to build the future of the data center together.

What you'll be doing:

- Running commissioning and provisioning for GPU systems
- Managing the firmware versions of equipment and components, and communicating the supported versions across the organization
- Maintaining tight SLOs around efficiency, performance, and availability through Day-2 operations
- Monitoring the hardware state of the cluster, finding bottlenecks and hot spots, and helping users consistently attain peak performance
- Triaging hardware break-fix issues and driving continuous improvement using open-source break-fix tooling
- Collaborating with software and technical teams to define and implement repeatable procedures
- Developing and implementing operations strategy and processes, maintaining consistency with SLAs across critical infrastructure
- Developing and applying procedures for minimal downtime, with quality controls that strive for continuous uptime
- Feeding requirements back to software and hardware teams
- Creating documentation the ecosystem can use to run its own AI data centers
Job Type: Full-time
Career Level: Mid Level