The Safety Systems team at OpenAI works to ensure the responsible development and deployment of the company's most capable AI models. This involves evaluations, safeguards, red teaming, deployment decisions, and the systems that help OpenAI understand and reduce risk. The Safety Program Manager will play a crucial role in streamlining the safety review process, driving the safe deployment of new models and products, synthesizing input from stakeholders across research, product, engineering, legal, and policy, and ensuring that risks are monitored, mitigated, or resolved. The role is highly cross-functional, collaborating with research, engineering, integrity, product, and strategy teams to ensure thorough safety risk reviews for all launches. The position is based in San Francisco, CA, with a hybrid work model requiring 3 days in the office per week.
Job Type
Full-time
Career Level
Mid Level