Researcher, Alignment Science

OpenAI · San Francisco, CA (Hybrid)

About The Position

The Alignment Science team at OpenAI studies the science of intent alignment: how to train models to understand what users are actually asking for, act faithfully on that intent while respecting safety constraints, verify what they did, and report their limitations honestly. Our work sits alongside broader value alignment efforts, but this team focuses on scalable methods for ensuring instruction-following, honesty, and robustness as models become more capable. We work on both sides of alignment research: producing externally publishable results and integrating promising techniques into the models OpenAI deploys.

Recent team research on model confessions studies how models can be trained to honestly report shortcomings after their original answer, including failures involving hallucination, instruction following, scheming, and reward hacking. That work reflects a broader agenda: build scalable and general methods to ensure models follow human intent. The team uses a mix of training and evaluation methods, with a focus on reinforcement learning. We care about rigorous, quantitative research that can translate into safer model behavior.

As a Research Engineer / Research Scientist on the Alignment Science team, you will design and run experiments that help increasingly capable models follow user intent, remain calibrated about correctness and risk, and honestly surface their own mistakes. You will work on hands-on model training, evaluation design, and research infrastructure, while helping turn promising alignment methods into techniques that can be used in frontier model development.

Requirements

  • Strong hands-on experience training, evaluating, or debugging large ML models, especially LLMs.
  • Excellent engineering skills in Python and modern ML frameworks such as PyTorch.
  • Mathematical rigor, quantitative taste, and comfort turning ambiguous research questions into measurable experiments.
  • Experience with reinforcement learning, post-training, preference optimization, scalable oversight, model evaluation, or adjacent empirical ML research.
  • Ability to operate with high independence, without close day-to-day handholding.
  • Enjoyment of fast-paced, collaborative research environments where priorities shift as models and evidence change.
  • Strong record in technical problem solving, such as competitive programming, math contests, systems work, or similarly rigorous engineering and research projects.
  • A commitment to building AI systems that are trustworthy, honest, and reliable in high-stakes settings.
  • Motivation to make concrete progress on alignment methods that can be tested, trained, published, and deployed.

Responsibilities

  • Design and implement alignment experiments focused on intent following, honesty, calibration, and robustness.
  • Train and evaluate models using reinforcement learning and other empirical ML methods.
  • Develop evaluations for failure modes such as hallucination, instruction-following failures, reward hacking, covert actions, and scheming.
  • Study methods that encourage models to verify their behavior and report shortcomings honestly, including confession-style training objectives.
  • Build monitoring and inference-time interventions that ensure compliant behavior or surface model issues to users or downstream systems.
  • Investigate how alignment methods scale with model capability, compute, data, context length, action length, and adversarial pressure.
  • Integrate successful techniques into model training and deployment workflows.
  • Produce externally publishable research when results advance the broader science of alignment.
  • Collaborate with researchers and engineers across post-training, RL, evaluations, safety, and product-facing teams.

Benefits

  • Relocation assistance for new employees.


What This Job Offers

Job Type: Full-time

Career Level: Senior

Education Level: No education requirement listed

