Fullstack Engineer, Safety Engineering

OpenAI
San Francisco, CA

About The Position

The Safety Systems org is responsible for ensuring that our best models can be safely deployed in the real world for the benefit of society. It sits at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Safety Engineering team builds the platforms and tools that make OpenAI's models safe to use in the real world. We partner closely with researchers, product teams, and policy to turn safety ideas into reliable, scalable systems: measuring risk, enforcing safeguards, and continuously improving how models behave in production. Our work sits at the intersection of product engineering, data, and AI, and directly shapes how millions of people experience OpenAI's technology.

We're looking for a self-starter engineer who loves building products in an iterative, fast-moving environment, especially internal tools that unlock real-world impact. In this role, you'll build full-stack tooling for our Safety Systems teams that directly improves the safety and reliability of OpenAI's models, including in sensitive areas like mental health and other vulnerable-user protections. Your work will increase the team's velocity in identifying and fixing safety issues and help tighten the feedback loop between policy, data, and the model training cycle.

Requirements

  • Have 5+ years of relevant engineering experience at tech and product-driven companies
  • Are proficient with JavaScript, React, and other web technologies
  • Are proficient with at least one backend language (we use Python)
  • Have experience with relational databases like Postgres/MySQL
  • Have an interest in AI/ML (direct experience not required)
  • Are excited to partner closely with researchers and policy writers to ship tools that directly improve the safety of OpenAI’s models

Responsibilities

  • Own the end-to-end development of internal tools that help improve the safety of OpenAI’s models (with a focus on areas like mental health and other vulnerable-user protections)
  • Partner closely with Safety Systems researchers, engineers, and model policy creators to understand workflows, pain points, and requirements—and translate them into durable product solutions
  • Build full-stack experiences to support core model policy workflows, such as labeling and inspecting data, analyzing and reviewing failure cases, and surfacing insights for iteration
  • Optimize internal applications for usability, speed, and scale to increase team velocity and reduce time-to-fix for safety issues
  • Transform successful AI-assisted safety workflows into external-facing safety products that empower developers to build safer AI, raise the industry standard for AI safety, and prepare the world for more capable AGI


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No education listed
  • Number of Employees: 5,001-10,000 employees
