Researcher, Agentic Post-Training

OpenAI
San Francisco, CA

About The Position

OpenAI is looking for exceptional researchers to join the Post-Training Frontiers team, which is responsible for post-training the agentic models shipped across Codex, the API, ChatGPT Thinking, and ChatGPT Pro. The team sets up the pipeline for deciding which integrations go into the post-training run, develops its own horizontal improvements to the model, and trains the final model.

The role centers on the most impactful horizontal improvements for the next model, which could include factuality, instruction following, function calling, multi-agent collaboration, calibrated reasoning effort, tool use, or improving taste in models. This could involve building or improving the grading stack, improving the user-data flywheel, or automating processes to make large post-training runs faster, more reliable, and easier for researchers to use.

This team is for individuals who want their work to directly impact models used by hundreds of millions of people. The ideal candidate is deeply technical, highly independent, goal-oriented rather than method-oriented, and excited by the high-agency work of turning research ideas into production model behavior.

Requirements

  • Strong ML fundamentals and hands-on experience with LLMs, RL, RLHF, post-training, evals, or model training.
  • Unusually strong engineering skills: able to move quickly in complex systems and make pragmatic technical decisions.
  • The ability to own ambiguous problems end-to-end without a tightly specified roadmap.
  • A focus on impact over method, and a willingness to do unglamorous but load-bearing work when it matters.
  • Excellent taste in model behavior and the ability to reason about what “good” looks like across many user-facing domains.
  • Comfort working across research, infrastructure, data, evals, and product boundaries.
  • Excitement to train and ship the frontier agentic models that power Codex, ChatGPT, and the API.

Nice To Haves

  • Experience with large-scale model training or RL systems.
  • Experience building evals, graders, reward models, or data pipelines for LLM training.
  • Experience with coding agents, tool-using agents, browser/computer-use agents, function calling, or multi-agent systems.
  • Background in quant, systems, infra, or other environments where you built reliable machinery for high-stakes experimentation.
  • Evidence of strong product taste, especially around writing, design, code generation, or agent workflows.

Responsibilities

  • Own end-to-end research and engineering projects that improve the final post-training of OpenAI’s agentic models.
  • Decide, together with partner teams, which integrations are ready for inclusion in major model runs.
  • Develop horizontal model improvements across factuality, instruction following, tool/function calling, multi-agent behavior, reasoning-effort calibration, and other broad capabilities.
  • Build and improve training, evaluation, grading, and data infrastructure for large-scale RL/post-training runs.
  • Create evals and diagnostics that help us understand whether a model is ready to ship.
  • Improve the feedback loop from real product usage into post-training, including better ways to learn from implicit user feedback.
  • Collaborate closely with Codex, API, ChatGPT, product, training, and other post-training teams to make frontier models more useful, reliable, and agentic.

What This Job Offers

Job Type: Full-time
Career Level: Senior

