Software Engineer, Hardware

OpenAI · San Francisco, CA · Hybrid

About The Position

OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

As a software engineer on the Scaling team, you’ll help build and optimize the low-level stack that orchestrates computation and data movement across OpenAI’s supercomputing clusters. Your work will involve designing high-performance runtimes, building custom kernels, contributing to compiler infrastructure, and developing scalable simulation systems to validate and optimize distributed training workloads. You will work at the intersection of systems programming, ML infrastructure, and high-performance computing, helping to create both ergonomic developer APIs and highly efficient runtime systems. This means balancing ease of use and introspection with the need for stability and performance on our evolving hardware fleet.

This role is based in San Francisco, CA, with a hybrid work model (3 days/week in-office). Relocation assistance is available.

Requirements

You might thrive in this role if you:

  • Have a deep curiosity for how large-scale systems work and enjoy making them faster, simpler, and more reliable.
  • Are proficient in systems programming (e.g., Rust, C++) and scripting languages like Python.
  • Have experience in one or more of the following areas: compiler development, kernel authoring, accelerator programming, runtime systems, distributed systems, or high-performance simulation.
  • Are excited to work in a fast-paced, highly collaborative environment with evolving hardware and ML system demands.
  • Value engineering excellence, technical leadership, and thoughtful system design.

Responsibilities

  • Design and build APIs and runtime components to orchestrate computation and data movement across heterogeneous ML workloads.
  • Contribute to compiler infrastructure, including the development of optimizations and compiler passes to support evolving hardware.
  • Engineer and optimize compute and data kernels, ensuring correctness, high performance, and portability across simulation and production environments.
  • Profile and optimize system bottlenecks, especially around I/O, memory hierarchy, and interconnects, at both local and distributed scales.
  • Develop simulation infrastructure to validate runtime behaviors, test training stack changes, and support early-stage hardware and system development.
  • Rapidly deploy runtime and compiler updates to new supercomputing builds in close collaboration with hardware and research teams.
  • Work across a diverse stack, primarily using Rust and Python, with opportunities to influence architecture decisions across the training framework.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 1,001-5,000
