The Safety Systems team is responsible for the safety work needed to ensure our best models can be safely deployed in the real world to benefit society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Pretraining Safety team's goal is to build safer, more capable base models and enable earlier, more reliable safety evaluation during training. We aim to:

- Develop upstream safety evaluations to monitor how and when unsafe behaviors and goals emerge
- Create safer priors through targeted pretraining and mid-training interventions that make downstream alignment more effective and efficient
- Design safe-by-design architectures that allow for greater controllability of model capabilities

In addition, we will conduct the foundational research necessary for understanding how behaviors emerge, generalize, and can be reliably measured throughout training. The Pretraining Safety team is pioneering how safety is built into models before they reach post-training and deployment.

In this role, you will work across the full stack of model development, with a focus on pretraining:

- Identify safety-relevant behaviors as they first emerge in base models
- Evaluate and reduce risk without waiting for full-scale training runs
- Design architectures and training setups that make safer behavior the default
- Strengthen models by incorporating richer, earlier safety signals

We collaborate across OpenAI's safety ecosystem, from Safety Systems to Training, to ensure that safety foundations are robust, scalable, and grounded in real-world risks.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
1,001-5,000 employees