About The Position

About the Team

Our Inference team brings OpenAI's most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to use and access our state-of-the-art AI models, allowing them to do things they've never been able to do before. We focus on performant and efficient model inference, as well as accelerating research progress via model inference.

About the Role

We're hiring engineers to scale and optimize OpenAI's inference infrastructure across emerging GPU platforms. You'll work across the stack, from low-level kernel performance to high-level distributed execution, and collaborate closely with research, infrastructure, and performance teams to ensure our largest models run smoothly on new hardware. This is a high-impact opportunity to shape OpenAI's multi-platform inference capabilities from the ground up, with a particular focus on advancing inference performance on AMD accelerators. The specific responsibilities for this role are listed below.

Requirements

We're looking for engineers who:

  • Have experience writing or porting GPU kernels using HIP, CUDA, or Triton, and care deeply about low-level performance.
  • Are familiar with communication libraries like NCCL/RCCL and understand their role in high-throughput model serving.
  • Have worked on distributed inference systems and are comfortable scaling models across fleets of accelerators.
  • Enjoy solving end-to-end performance challenges across hardware, system libraries, and orchestration layers.
  • Are excited to be part of a small, fast-moving team building new infrastructure from first principles.

Nice To Haves

  • Contributions to open-source libraries like RCCL, Triton, or vLLM.
  • Experience with GPU performance tools (Nsight, rocprof, perf) and memory/comms profiling.
  • Prior experience deploying inference in non-NVIDIA GPU environments.
  • Knowledge of model/tensor parallelism, mixed precision, and serving 10B+ parameter models.

Responsibilities

  • Own the bring-up, correctness, and performance of the OpenAI inference stack on AMD hardware.
  • Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.
  • Debug and optimize distributed inference workloads across memory, network, and compute layers.
  • Validate correctness, performance, and scalability of model execution on large GPU clusters.
  • Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks.
  • Collaborate with partner teams to build, integrate, and tune collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs.


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 1,001-5,000 employees
