About The Position

At NVIDIA, data centers are the engine behind AI. Join us to develop, launch, and operate open-source data centers: NVIDIA is building the blueprint for AI factories worldwide. We are seeking an SRE to architect and automate the lifecycle of next-generation, open-source-driven data centers at gigawatt scale. This role goes beyond infrastructure maintenance to define the industry’s reference operational model for GPU-accelerated computing. Success means bridging the gap between hardware bring-up and large-scale distributed software to ensure the foundation of global AI remains resilient, scalable, and open. For an engineer driven to tackle the telemetry, orchestration, and reliability challenges found only at gigawatt scale, this is an opportunity to build the future of the data center together.

Requirements

  • BS or MS degree in Computer Engineering/Science, or a related field (or equivalent experience), with 10+ years of relevant work experience
  • Experience managing GPU fleets
  • 10+ years of experience improving data center operations or critical infrastructure
  • Expertise in building management systems (BMS) and power management
  • Background working with provisioning, commissioning, and configuration management solutions
  • Experience working with Packer and building QCOW2 images
  • Experience coordinating with remote-hands teams
  • Experience working with data center inventory management systems such as NetBox or Nautilus
  • Proven track record of working with multiple teams to achieve operational excellence for an organization
  • Experience driving reliability with robust processes, rapid field response, and recovery

Nice To Haves

  • Experience with automated break-fix solutions at scale
  • Familiarity with message buses and workflow engines
  • Hands-on experience with zero-touch provisioning (ZTP) solutions for network and host

Responsibilities

  • Running commissioning and provisioning for GPU systems
  • Managing firmware versions for equipment and components, and communicating the supported versions across the organization
  • Maintaining tight SLOs for efficiency, performance, and availability throughout Day-2 operations
  • Monitoring the hardware state of the cluster, identifying bottlenecks and hot spots, and helping users consistently attain peak performance
  • Triaging hardware break-fix issues and driving continuous improvements using open-source break-fix solutions
  • Collaborating with software and hardware teams to define and implement repeatable procedures
  • Developing and implementing operations strategy and processes, maintaining consistency with SLAs across critical infrastructure
  • Developing and applying procedures and quality controls that minimize downtime and strive for continuous uptime
  • Feeding requirements to software and hardware teams
  • Creating documentation that the ecosystem can use to run its own AI data centers

Benefits

  • Competitive salaries
  • Generous benefits package
  • Equity