About The Position

The Workload team is responsible for designing and running OpenAI’s LLM training and inference infrastructure that powers frontier models at massive scale. Our systems unify how researchers train and serve models, abstracting away the complexity of performance, parallelism, and execution across vast GPU/accelerator fleets. By providing this foundation, the Workload team ensures that researchers can focus on advancing model capabilities while we handle the scale, efficiency, and reliability required to bring those models to life.

We are looking for an engineer to design and implement the dataset infrastructure that powers OpenAI’s next-generation training stack. You will be responsible for building standardized dataset interfaces, scaling pipelines across thousands of GPUs, and proactively testing for performance bottlenecks. In this role, you will collaborate closely with multimodal researchers and other infrastructure groups to ensure datasets are unified, efficient, and easy to consume.

Requirements

  • Strong engineering fundamentals with experience in distributed systems, data pipelines, or infrastructure.
  • Experience building APIs, modular code, and scalable abstractions, with the understanding that abstractions ultimately serve their users and that UX is an important part of abstraction design.
  • Comfortable debugging bottlenecks across large fleets of machines.
  • Pride in building infrastructure that 'just works,' and joy in being the guardian of reliability and scale.
  • Collaborative, humble, and excited to own a foundational (if unglamorous) part of the ML stack.

Nice To Haves

  • Background knowledge in data math, probability, or distributed data theory.
  • Experience with GPU-scale distributed systems or dataset scaling for real-time data.

Responsibilities

  • Design and maintain standardized dataset APIs, including support for multimodal (MM) data that cannot fit in memory.
  • Build proactive testing and scale validation pipelines for dataset loading at GPU scale.
  • Collaborate with teammates to integrate datasets seamlessly into training and inference pipelines, ensuring smooth adoption and a great user experience.
  • Document and maintain dataset interfaces so they are discoverable, consistent, and easy for other teams to adopt.
  • Establish safeguards and validation systems to ensure datasets remain reproducible and unchanged once standardized.
  • Debug and resolve performance bottlenecks in distributed dataset loading (e.g., straggler systems slowing global training).
  • Provide visualization and inspection tools to surface errors, bugs, or bottlenecks in datasets.