Technical Program Manager, Frontier Evals

OpenAI
San Francisco, CA
Hybrid

About The Position

OpenAI’s Frontier Evals team designs and builds evaluations that measure the capabilities, limitations, and emerging behaviors of our most advanced models. As a research team, we advance both the science and infrastructure of model evaluation, developing methods, systems, and datasets that help us understand where frontier models succeed, where they fail, and what those results imply for future development and deployment.

As a Technical Program Manager on Frontier Evals, you will drive high-priority evaluation and research programs from concept through design, execution, and analysis. This is a hybrid IC and program management role: you will help design evals, build lightweight technical workflows, manage human data campaigns, create project roadmaps, track execution, and coordinate across researchers, engineers, data teams, vendors, and domain experts. You should be comfortable ramping quickly on unfamiliar topics, turning open-ended research questions into concrete plans, and doing the hands-on work required to make progress before perfect infrastructure exists.

The right person is operationally strong, highly resourceful, and excited to work on research projects where the path is not already defined.

This role is based in San Francisco, CA. We require 3 days in the office per week and offer relocation assistance to new employees.

Requirements

  • Have experience in technical program management, research operations, data operations, evaluation, or a similarly ambiguous technical execution role.
  • Are proficient enough in Python, SQL, or similar tools to analyze datasets, inspect model outputs, automate workflows, and unblock yourself without waiting on engineering support for every step (a minimal sketch of this kind of self-serve analysis follows this list).
  • Have a strong understanding of how large language models work, including prompting, model evaluation, grading, and common failure modes.
  • Are excited to work as both an IC and a program manager: writing analysis scripts one day, aligning stakeholders the next, and then redesigning a data campaign or eval rubric when results reveal a flaw.
  • Can quickly turn vague research goals into clear plans, crisp milestones, owners, risks, and decision points.
  • Are relentlessly resourceful: you find partial, scrappy, technically sound ways to make progress while helping teams build more scalable systems over time.
  • Communicate clearly with technical and non-technical stakeholders, especially when explaining tradeoffs, uncertainty, quality risks, and research findings.
  • Learn new technical domains quickly and enjoy context switching across multiple high-priority projects.
  • Care deeply about building rigorous evaluations that help OpenAI understand and safely deploy increasingly capable models.
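
To make the "unblock yourself" expectation concrete, here is a minimal, purely illustrative sketch of the kind of lightweight analysis the role involves: tallying per-task pass rates from graded eval results. The file name and record fields ("results.jsonl", "task_id", "grade") are assumptions for illustration, not a real pipeline format.

```python
# Illustrative only: "results.jsonl" and its fields are hypothetical,
# standing in for whatever format a real eval pipeline emits.
import json
from collections import defaultdict

passes = defaultdict(int)   # per-task count of passing samples
totals = defaultdict(int)   # per-task count of all samples

with open("results.jsonl") as f:
    for line in f:
        record = json.loads(line)   # one graded model output per line
        task = record["task_id"]
        totals[task] += 1
        if record["grade"] == "pass":
            passes[task] += 1

# Print per-task pass rates, worst first, to spot weak areas quickly.
for task in sorted(totals, key=lambda t: passes[t] / totals[t]):
    rate = passes[task] / totals[task]
    print(f"{task}: {passes[task]}/{totals[task]} ({rate:.1%})")
```

Nothing here depends on internal tooling; the point is being comfortable doing this sort of inspection yourself rather than waiting on engineering support.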

Responsibilities

  • Manage frontier evaluation projects from initial research questions to delivered benchmarks.
  • Partner with researchers and engineers to translate ambiguous model capability questions into concrete eval designs, success metrics, timelines, and execution plans.
  • Design and manage human data campaigns, including task design, trainer or expert instructions, and quality control workflows.
  • Do hands-on technical work where needed, including prompt iteration, model-based evaluation workflows, data analysis, lightweight scripting, dashboarding, and debugging eval pipelines (see the grading sketch after this list).
  • Build roadmaps and operating rhythms that keep fast-moving research efforts aligned and unblocked.
  • Coordinate across research, engineering, human data, product, safety, legal, external vendors, and domain experts to deliver high-quality evals under tight timelines.
  • Ramp quickly on new domains and project areas, identifying what needs to be learned, who needs to be involved, and what it will take to deliver the project.
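
As a concrete (and again purely illustrative) example of a model-based evaluation workflow, the sketch below grades answers against a reference with an LLM judge via the OpenAI Python SDK. The rubric wording, grader model choice, and PASS/FAIL protocol are assumptions for illustration, not a description of Frontier Evals' actual tooling.

```python
# A minimal model-based grading sketch. The rubric, prompt wording, and
# model name are assumptions, not a description of internal tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are grading a model answer against a reference answer. "
    "Reply with exactly one word: PASS if the answer is factually "
    "consistent with the reference, otherwise FAIL."
)

def grade(question: str, answer: str, reference: str) -> bool:
    """Ask a grader model to judge one answer; returns True on PASS."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice of grader model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": (
                f"Question: {question}\n"
                f"Reference answer: {reference}\n"
                f"Model answer: {answer}"
            )},
        ],
        temperature=0,  # keep grading as deterministic as possible
    )
    verdict = (resp.choices[0].message.content or "").strip().upper()
    return verdict.startswith("PASS")
```

In practice, much of the TPM work described above is in what surrounds a grader like this: auditing its agreement with human judgments, redesigning the rubric when results reveal a flaw, and tracking quality risks across the campaign.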

Benefits

  • Relocation assistance