OpenAI · Posted 11 days ago
Full-time • Mid Level
Hybrid • San Francisco, CA
1,001-5,000 employees

The Agent Infrastructure team at OpenAI builds the systems that enable training and deployment of highly useful AI agents, both internally and for the world. We work hand-in-hand with researchers to design and scale the environments in which agentic models are trained, providing a workspace where AI models can execute code, debug issues, and develop software just as human software engineers do. Our training environment for agentic models operates at extremely high scale and is flexible enough to emulate any environment in which an agent might work.

At the same time, our team builds and maintains OpenAI's core platform for deploying and executing agents in production. Our systems power products such as Codex, Operator, tool use in ChatGPT, and future agentic products. Some of the most challenging technical problems in scaling the capabilities and utility of agents and agentic models lie in the infrastructure layer, and our team is focused on building the research and production systems that enable OpenAI to train the most capable models in the world and maximize the utility of our agentic products for users everywhere.

As a Software Engineer on the Agent Infrastructure team, you will work closely with both research and product at OpenAI: building and scaling systems to train highly capable agentic models, and building the platform and integrations to launch new agents to hundreds of millions of users worldwide. Your work will span building new capabilities, such as standing up the infrastructure and integrations needed to train more complex agentic models, and rapidly scaling those capabilities to some of the largest compute clusters in the world. You will also be instrumental in launching agentic products at OpenAI, building, maintaining, and scaling the production platform on which all agents run.
We're looking for people with deep experience building AI infrastructure who are accustomed to working closely with researchers to build high-performance systems at massive scale for novel use cases.

This role is based in San Francisco, CA or New York City, NY. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.

In this role, you will:

  • Push massive compute clusters to their limits. You will be a core contributor to a novel container orchestration platform, built in-house by our team, that scales far beyond what's possible with systems like Kubernetes.
  • Develop and maintain FastAPI and gRPC APIs that serve as the interface for our agentic infrastructure used both in training and production.
  • Use Terraform to stand up and evolve complex infrastructure for both research and production.
  • Collaborate with research teams to stand up and optimize systems for novel AI training runs and experimental applications.
You might thrive in this role if you:

  • Have deep experience working on large-scale machine learning infrastructure. You know how to reason about training at scale, identify bottlenecks, and engineer solutions that optimize system performance in training environments.
  • Know how to build new things from 0 to 1 quickly, and then scale them 1,000,000x.
  • Have a keen eye for performance and optimization. You know how to squeeze the most performance out of complex, globally-distributed systems.
  • Know your way around cloud platforms and work with infrastructure-as-code tech like Terraform.
  • Are driven by solving complex, ambiguous problems at the intersection of infrastructure scalability, virtualization efficiency, and agentic capabilities.
  • Have deep technical expertise in virtualization and containerization technologies (e.g., Kata Containers, Firecracker, gVisor, Sysbox) and are passionate about optimizing runtime performance.