The Post-Training team is responsible for training and improving pre-trained models for deployment into ChatGPT, the API, and future products. The team partners closely with research and product teams across the company, and conducts research as a final step to prepare for real-world deployment to millions of users, ensuring that our models are safe, efficient, and reliable.

The Science of Post-training team is responsible for advancing the frontier of RLHF. We combine rigorous scientific experimentation with strong technical execution to drive progress in model alignment. Our goal is to develop insights that make model training more robust and efficient. We contribute to core model deployments like GPT-4.1 and o3, but our main mandate is to pursue foundational research that will guide the trajectory of future model development.