Software Engineer, GPU Infrastructure - HPC

OpenAI, San Francisco, CA

About The Position

The Fleet team at OpenAI supports the computing environment that powers our cutting-edge research and product development. We oversee large-scale systems that span data centers, GPUs, networking, and more, ensuring high availability, performance, and efficiency. Our work enables OpenAI’s models to operate seamlessly at scale, supporting both internal research and external products like ChatGPT. We prioritize safety, reliability, and responsible AI deployment over unchecked growth.

As a software engineer on the Fleet High Performance Computing (HPC) team, you will be responsible for the reliability and uptime of OpenAI’s entire compute fleet. Minimizing hardware failures is key to research training progress and stable services, as even a single hardware hiccup can cause significant disruptions. With increasingly large supercomputers, the stakes continue to rise. Being at the forefront of technology means we are often the first to troubleshoot these state-of-the-art systems at scale. This is a unique opportunity to work with cutting-edge technologies and devise innovative solutions that maintain the health and efficiency of our supercomputing infrastructure.

Our team empowers strong engineers with a high degree of autonomy and ownership, as well as the ability to effect change. This role requires a keen focus on comprehensive, system-level investigations and the development of automated solutions. We want people who go deep on problems, investigate as thoroughly as possible, and build automation for detection and remediation at scale.

Requirements

  • Experience managing large-scale server environments.
  • A balance of strengths in building and operationalizing.
  • Proficiency in Python, Go, or similar languages.
  • Strong Linux, networking, and server hardware knowledge.
  • Comfort digging into noisy data with SQL, PromQL, pandas, or similar tools (a brief sketch follows this list).
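
To illustrate the last point above, here is a minimal sketch of sifting noisy GPU telemetry with pandas; the file name, column names, rolling window, and 10 °C deviation band are assumptions made for the example, not details from this posting.

```python
import pandas as pd

# Hypothetical telemetry export; columns assumed: timestamp, gpu_id, temp_c.
df = pd.read_csv("gpu_telemetry.csv", parse_dates=["timestamp"])
df = df.sort_values(["gpu_id", "timestamp"])

# Smooth each GPU's readings with a rolling median to get a per-device baseline.
df["temp_baseline"] = (
    df.groupby("gpu_id")["temp_c"]
      .transform(lambda s: s.rolling(window=15, min_periods=5).median())
)

# Flag samples that deviate sharply from the baseline (the 10 °C band is arbitrary).
df["anomalous"] = (df["temp_c"] - df["temp_baseline"]).abs() > 10

print(df.loc[df["anomalous"], ["timestamp", "gpu_id", "temp_c", "temp_baseline"]])
```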

Nice To Haves

  • Experience with low-level details of hardware components, protocols, and associated Linux tooling (e.g., PCIe, InfiniBand, networking, power management, kernel performance tuning).
  • Knowledge of hardware management protocols (e.g., IPMI, Redfish); see the sketch after this list.
  • High-performance computing (HPC) or distributed systems experience.
  • Prior experience developing, managing, or designing hardware.
  • Familiarity with monitoring tools (e.g., Prometheus, Grafana).
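
For the hardware management protocols mentioned above, a minimal sketch of polling a BMC over Redfish with Python's requests library might look like the following. The BMC hostname and credentials are placeholders; the /redfish/v1/Systems collection and the PowerState and Status fields come from the DMTF Redfish standard.

```python
import requests

BMC = "https://bmc01.example.internal"   # hypothetical BMC address
AUTH = ("monitor", "changeme")           # hypothetical read-only credentials

session = requests.Session()
session.auth = AUTH
session.verify = False  # many BMCs ship self-signed certs; verify properly in production

# The Redfish service root exposes a Systems collection; each member reports
# its power state and a rolled-up health status.
systems = session.get(f"{BMC}/redfish/v1/Systems", timeout=10).json()
for member in systems.get("Members", []):
    system = session.get(f"{BMC}{member['@odata.id']}", timeout=10).json()
    print(system.get("Id"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```

In practice, polling like this would feed the kind of Prometheus/Grafana monitoring mentioned in the last item.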

Responsibilities

  • Build and maintain automation systems for provisioning and managing server fleets.
  • Develop tools to monitor server health, performance, and lifecycle events.
  • Collaborate with the clusters, networking, and infrastructure teams.
  • Partner with external operators to ensure a high level of quality.
  • Identify and fix performance bottlenecks and inefficiencies.
  • Continuously improve automation to reduce manual work (a brief sketch follows this list).
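
As a rough illustration of the detect-and-remediate loop these responsibilities describe, here is a hedged sketch assuming SSH access to nodes and a Kubernetes-style scheduler; the node names, the Xid-error check, and the kubectl cordon remediation are illustrative stand-ins, not OpenAI tooling.

```python
import subprocess
import time

NODES = ["gpu-node-001", "gpu-node-002"]  # placeholder inventory

def has_xid_errors(node: str) -> bool:
    """Illustrative detection: count NVIDIA Xid errors in the node's kernel log."""
    result = subprocess.run(
        ["ssh", node, "dmesg --level=err | grep -c Xid || true"],
        capture_output=True, text=True, timeout=30,
    )
    return int(result.stdout.strip() or 0) > 0

def cordon(node: str) -> None:
    """Illustrative remediation: take the node out of scheduling rotation."""
    subprocess.run(["kubectl", "cordon", node], check=True)

def reconcile() -> None:
    # Detect unhealthy nodes and remove them from rotation automatically,
    # so humans only handle the cases automation cannot.
    for node in NODES:
        if has_xid_errors(node):
            cordon(node)

if __name__ == "__main__":
    while True:
        reconcile()
        time.sleep(300)  # poll every five minutes
```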

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Not specified
  • Number of Employees: 1,001-5,000 employees
