The Applied Foundations team at OpenAI is dedicated to ensuring that our cutting-edge technology is not only revolutionary but also secure from a myriad of adversarial threats. We strive to maintain the integrity of our platforms as they scale. Our team is on the front lines of defending against financial abuse, scaled attacks, and other forms of misuse that could undermine the user experience or harm our operational stability.

The Integrity pillar within Applied Foundations is responsible for the scaled systems that help identify and respond to bad actors and harm on OpenAI's platforms. As the systems that address some of our most severe usage harms mature, we're adding data scientists to help us robustly measure the prevalence of these problems and the quality of our response to them.

We are looking for experienced trust and safety data scientists to help us improve, productionize, and monitor measurement for complex, actor- and sometimes network-level harms. A data scientist in this role will own measurement and metrics across several established harm verticals, including estimating the prevalence of on-platform (and sometimes off-platform!) harm and conducting analyses to identify gaps and opportunities in our responses.

This role is based out of our San Francisco or New York office and may involve resolving urgent escalations outside of normal work hours. Many harm areas may involve sensitive content, including sexual, violent, or otherwise disturbing material.
Job Type: Full-time
Career Level: Mid Level
Education Level: Not specified