The Safety Systems team ensures that our best models can be safely deployed to the real world to benefit society. It sits at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Safety Research team aims to fundamentally advance our ability to implement robust, safe behavior in AI models and systems. As capabilities continue to advance, our approaches to safety must improve and scale to address evolving risks. This matters both for making our systems robust against harmful misuse and for ensuring that potential misalignment cannot cause harm. We work on these problems in ways that are grounded in our current models and methods but that generalize to future systems.

We are growing our team to expand our research on methods that will improve safety for AGI and beyond. This includes exploratory research: for example, new methods to improve safety common sense and generalizable reasoning, new evaluations to elicit or detect misalignment or inner goals of the AI, and new methods to support human oversight of long-running tasks.
Job Type: Full-time
Career Level: Senior
Education Level: Ph.D. or professional degree
Number of Employees: 1,001-5,000 employees