3P Architect

OpenAI
San Francisco, CA (Hybrid)

About The Position

OpenAI’s Hardware organization develops system and infrastructure solutions tailored to the demands of advanced AI workloads. We work across the full stack, from silicon to system integration, partnering closely with internal teams and external vendors to define and deliver next-generation AI infrastructure. Our team focuses on defining scalable, high-performance system architectures and reference designs that balance performance, cost, and operational efficiency across rapidly evolving technologies.

We are seeking a 3P (third-party) Architect to define and drive rack- and cluster-level reference designs in collaboration with external partners. This role is responsible for translating workload requirements and system-level goals into concrete architectures, aligning partners on critical design attributes, and ensuring vendor roadmaps meet our infrastructure needs. You will work closely with performance modeling and internal architecture teams to evaluate tradeoffs, while owning the end-to-end definition and execution of third-party system designs. This includes identifying gaps in current technologies, driving vendor development, and shaping future infrastructure capabilities.

This role requires strong system intuition, cross-functional leadership, and the ability to operate effectively across internal teams and external ecosystems.

Requirements

  • Strong experience in system architecture for large-scale infrastructure or data center environments.
  • Deep understanding of AI workload characteristics and how they map to system-level design decisions.
  • Comfort working with performance modeling outputs to inform architectural direction.
  • Experience working with or managing hardware vendors (ODM/JDM, silicon, networking).
  • Ability to drive alignment across multiple stakeholders with competing constraints.
  • A track record of turning ambiguous requirements into clear, executable system designs.
  • A proactive approach to identifying gaps and driving solutions across organizational boundaries.

Nice To Haves

  • Experience defining rack- or cluster-level systems for hyperscale or AI workloads.
  • Familiarity with accelerators (GPUs/ASICs), interconnects, and data center networking architectures.
  • Experience influencing vendor roadmaps and reference designs.
  • Background in infrastructure deployment, hardware engineering, or systems integration.
  • Experience leading PoCs or early-stage hardware validation efforts.

Responsibilities

  • Define rack- and cluster-level reference architectures for AI infrastructure deployments.
  • Translate workload requirements into clear system design specifications and partner deliverables.
  • Collaborate with performance modeling teams to evaluate architectural tradeoffs and system behaviors.
  • Align internal stakeholders and external partners on critical system attributes (performance, cost, power, reliability, scalability).
  • Identify gaps in current technology offerings and drive vendors (ODM/JDM, silicon, networking) to close those gaps.
  • Influence and shape vendor roadmaps to meet future infrastructure needs.
  • Track emerging technologies and evaluate their applicability to AI systems.
  • Define and lead proof-of-concept (PoC) efforts to validate new architectures and technologies.
  • Act as a key interface between OpenAI and external partners, ensuring execution against design intent.

Benefits

  • Relocation assistance