The Model Policy team aligns model behavior with desired human values and norms. We co-design policy with models and for models by driving rapid policy taxonomy iteration based on data and by defining evaluation criteria for foundational models' ability to reason about safety.

Frontier AI systems are rapidly expanding what is possible in cybersecurity and software engineering. These capabilities create major defensive opportunities, but they also raise serious dual-use and misuse risks across areas such as malware development, exploit discovery, vulnerability chaining, credential abuse, cyber intrusion, and autonomous offensive operations.

In this role, you will help define how OpenAI's models should behave in high-risk cybersecurity contexts. You will develop policy frameworks, threat models, taxonomies, evaluations, and behavioral specifications that guide model behavior across training, deployment, and monitoring systems.

This role sits at the intersection of cybersecurity, AI safety, threat modeling, evaluation science, and policy implementation. You will work closely with research, engineering, safety training, preparedness, and product teams to build policies that are technically grounded, measurable, enforceable, and responsive to real-world cyber risk.
Job Type: Full-time