The Safety Systems org is at the forefront of OpenAI's mission to build and deploy safe AGI. It is responsible for the safety work that ensures our best models can be safely deployed in the real world to benefit society, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Safety Engineering team builds the platforms and tools that make OpenAI's models safe to use in the real world. We partner closely with researchers, product teams, and policy to turn safety ideas into reliable, scalable systems: measuring risk, enforcing safeguards, and continuously improving how models behave in production. Our work sits at the intersection of product engineering, data, and AI, and directly shapes how millions of people experience OpenAI's technology.

We're looking for a self-starter engineer who loves building products in an iterative, fast-moving environment, especially internal tools that unlock real-world impact. In this role, you'll build full-stack tooling for our Safety Systems teams that directly improves the safety and reliability of OpenAI's models, including in sensitive areas like mental health and other vulnerable-user protections. Your work will increase the team's velocity in identifying and fixing safety issues and tighten the feedback loop between policy, data, and the model training cycle.
Job Type: Full-time
Career Level: Mid Level
Education Level: Not specified
Number of Employees: 5,001-10,000