About The Position

OpenAI’s Inference team ensures that our most advanced models run efficiently, reliably, and at scale. We build and optimize the systems that power our production APIs, internal research tools, and experimental model deployments. As model architectures and hardware evolve, we’re expanding support for a broader set of compute platforms - including AMD GPUs - to increase performance, flexibility, and resiliency across our infrastructure. We are forming a team to generalize our inference stack - including kernels, communication libraries, and serving infrastructure - to alternative hardware architectures.

Requirements

  • Experience writing or porting GPU kernels using HIP, CUDA, or Triton, with a strong focus on low-level performance.
  • Familiarity with communication libraries such as NCCL/RCCL and their role in high-throughput model serving.
  • Experience with distributed inference systems and comfort scaling models across fleets of accelerators.
  • Enthusiasm for solving end-to-end performance challenges across hardware, system libraries, and orchestration layers.
  • Excitement to join a small, fast-moving team building new infrastructure from first principles.

Nice To Haves

  • Contributions to open-source libraries like RCCL, Triton, or vLLM.
  • Experience with GPU performance tools (Nsight, rocprof, perf) and memory/comms profiling.
  • Prior experience deploying inference in non-NVIDIA GPU environments.
  • Knowledge of model/tensor parallelism, mixed precision, and serving 10B+ parameter models.

Responsibilities

  • Own bring-up, correctness, and performance of the OpenAI inference stack on AMD hardware.
  • Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.
  • Debug and optimize distributed inference workloads across memory, network, and compute layers.
  • Validate correctness, performance, and scalability of model execution on large GPU clusters.
  • Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks.
  • Collaborate with partner teams to build, integrate, and tune collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs.