The Alignment Science team at OpenAI studies the science of intent alignment: how to train models to understand what users are actually asking for, act faithfully on that intent while respecting safety constraints, verify what they did, and report their limitations honestly. Our work sits alongside broader value alignment efforts, but this team focuses on scalable methods for ensuring instruction following, honesty, and robustness as models become more capable. We work on both sides of alignment research: producing externally publishable results and integrating promising techniques into the models OpenAI deploys.

Recent team research on model confessions examines how models can be trained to honestly report shortcomings after giving their original answer, including failures involving hallucination, instruction following, scheming, and reward hacking. That work reflects a broader agenda: building scalable, general methods to ensure models follow human intent. The team uses a mix of training and evaluation methods, with a focus on reinforcement learning, and we care about rigorous, quantitative research that translates into safer model behavior.

As a Research Engineer / Research Scientist on the Alignment team, you will design and run experiments that help increasingly capable models follow user intent, remain calibrated about correctness and risk, and honestly surface their own mistakes. You will do hands-on model training, evaluation design, and research infrastructure work, while helping turn promising alignment methods into techniques that can be used in frontier model development.
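To make the confessions work concrete: a confession-style training signal rewards a model for honestly flagging its own failures after the fact, rather than punishing any admission of error. The sketch below is a hypothetical toy illustration of such a reward, not the team's actual method; `Episode`, `confession_reward`, and all numeric values are made up for this example.

```python
# Toy sketch of a confession-style reward signal (hypothetical, for
# illustration only -- not OpenAI's actual training objective).

from dataclasses import dataclass


@dataclass
class Episode:
    """One transcript: a graded answer plus a post-hoc self-report."""
    answer_correct: bool      # ground-truth grade of the original answer
    confessed_mistake: bool   # did the model report a shortcoming afterwards?


def confession_reward(ep: Episode) -> float:
    """Reward honest self-reports independently of answer quality.

    Key property: confessing a real mistake always beats hiding it,
    so the model is never disincentivized from reporting failures.
    """
    answer_reward = 1.0 if ep.answer_correct else 0.0
    if ep.answer_correct:
        # Falsely claiming a mistake on a correct answer is penalized.
        honesty_reward = -0.5 if ep.confessed_mistake else 0.5
    else:
        # Admitting a real mistake is rewarded; concealing it is penalized.
        honesty_reward = 0.5 if ep.confessed_mistake else -0.5
    return answer_reward + honesty_reward


if __name__ == "__main__":
    cases = [
        Episode(answer_correct=True, confessed_mistake=False),   # ideal
        Episode(answer_correct=False, confessed_mistake=True),   # honest failure
        Episode(answer_correct=False, confessed_mistake=False),  # hidden failure
        Episode(answer_correct=True, confessed_mistake=True),    # false confession
    ]
    for ep in cases:
        print(ep, "->", confession_reward(ep))
```

The design choice worth noting is that honesty is scored separately from answer quality: an honest failure (0.5) strictly outranks a hidden one (-0.5), which is what gives the model room to report shortcomings without sacrificing reward.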
Job Type: Full-time
Career Level: Senior
Education Level: None listed
Number of Employees: 1-10