Providing access to powerful AI models raises challenging questions about model safety: How do we define safe model behavior? To what end? How do we do so in a way that is actionable, objective, and replicable?

This is a senior role in which you’ll help shape policy creation and development at OpenAI, making an impact by helping ensure that our groundbreaking technologies do not cause harm. The ideal candidate can identify and develop cohesive, thoughtful taxonomies of harm on high-risk topics with a sense of urgency. They can balance internal and external input when making complex decisions, carefully weigh trade-offs, and write principled, enforceable policies grounded in our values. Importantly, this role is embedded in our research teams and directly informs model training.

This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.
Job Type: Full-time
Career Level: Senior
Education Level: Not specified
Number of Employees: 1,001-5,000