About The Position

The Frontier Systems team at OpenAI builds, launches, and supports the largest supercomputers in the world, which OpenAI uses to train its most cutting-edge models. We take data center designs, turn them into real, working systems, and build any software needed to run large-scale frontier model training. Our mission is to bring up, stabilize, and keep these hyperscale supercomputers reliable and efficient throughout the training of frontier models.

We are looking for engineers to operate the next generation of compute clusters that power OpenAI’s frontier research. This role blends distributed systems engineering with hands-on infrastructure work in our largest data centers. You will scale Kubernetes clusters to massive size, automate bare-metal bring-up, and build the software layer that hides the complexity of vast fleets of nodes spread across multiple data centers. You will work at the intersection of hardware and software, where speed and reliability are critical. Expect to manage fast-moving operations, diagnose and fix issues quickly when things are on fire, and continuously raise the bar for automation and uptime.

Requirements

  • Experience as an infrastructure, systems, or distributed systems engineer in large-scale or high-availability environments
  • Strong knowledge of Kubernetes internals, cluster scaling patterns, and containerized workloads
  • Proficiency in cloud infrastructure concepts (compute, networking, storage, security) and in automating cluster or data center operations

Nice To Haves

  • Deep experience operating or scaling Kubernetes clusters or similar container orchestration systems in high-growth or hyperscale environments
  • Strong programming or scripting skills (Python, Go, or similar) and familiarity with Infrastructure-as-Code tools such as Terraform or CloudFormation
  • Comfort with bare-metal Linux environments, GPU hardware, and large-scale networking
  • Enthusiasm for solving fast-moving, high-impact operational problems and for building automation to eliminate manual work
  • The ability to balance careful engineering with the urgency of keeping mission-critical systems running
  • Bonus: background with GPU workloads, firmware management, or high-performance computing

Responsibilities

  • Spin up and scale large Kubernetes clusters, including automation for provisioning, bootstrapping, and cluster lifecycle management
  • Build software abstractions that unify multiple clusters and present a seamless interface to training workloads
  • Own node bring-up from bare metal through firmware upgrades, ensuring fast, repeatable deployment at massive scale
  • Improve operational metrics such as reducing cluster restart times (e.g., from hours to minutes) and accelerating firmware or OS upgrade cycles
  • Integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure
  • Develop monitoring and observability systems to detect issues early and keep clusters stable under extreme load


What This Job Offers

Job Type: Full-time

Career Level: Mid Level

Education Level: No Education Listed

Number of Employees: 1,001-5,000 employees

© 2024 Teal Labs, Inc