Product Policy - National Security

OpenAI · Washington, DC
Hybrid

About The Position

The Product Policy team is responsible for the development, implementation, enforcement, and communication of the policies that govern use of OpenAI’s services, including ChatGPT, GPTs, the GPT Store, Sora, and the OpenAI API. As a member of this team, you will be instrumental in developing policy approaches that enable both innovative and responsible use of AI so that our groundbreaking technologies are truly used to benefit all people.

In this role, you will work at the intersection of AI capabilities, national security use cases, and risk governance. You will contribute to the development and refinement of OpenAI’s national security usage policies, support operational decision-making on sensitive use cases, and partner closely with technical, legal, investigations, and OpenAI for Government teams to ensure responsible deployment of our systems in high-risk contexts.

This role will own significant parts of the core work, operating with autonomy while partnering closely with the manager for strategic direction and prioritization. Much of the foundational infrastructure for this work is already in place; we are seeking a strong executor with relevant domain expertise who can take ownership of this portfolio and drive it forward as the scope and complexity of national security engagement continues to grow.

This role is based in Washington, D.C. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

Requirements

  • Have 4+ years of experience working in national security, defense, intelligence, or adjacent high-risk policy environments, with demonstrated experience with advanced or emerging technologies (including AI-enabled systems).
  • Have experience developing and/or operationalizing risk assessments and policies in partnership with technical and legal teams.
  • Demonstrate the ability to drive alignment across diverse internal and external stakeholders, navigating substantive differences in perspective while balancing principled risk management with pragmatic decision-making.
  • Communicate clearly and credibly with product managers, engineers, researchers, lawyers, and executives.
  • Demonstrate the ability to adapt quickly to new problem spaces and evolving priorities, learning unfamiliar domains as needed and contributing effectively across adjacent work streams as organizational needs shift.

Nice To Haves

  • Eligible for a U.S. security clearance (or equivalent clearance in another NATO country).

Responsibilities

  • Contribute to the ongoing development, refinement, and communication of OpenAI’s approach to governing national security use of OpenAI technology.
  • Support operational reviews of high-risk use cases, including preparing clear, well-structured decision briefs for internal stakeholders on complex or sensitive matters.
  • Partner closely with cross-functional teams to design and support risk assessments, evaluations, and other mechanisms used to assess fitness-for-purpose of AI capabilities in national security contexts.
  • Drive national-security deployments through required safety, risk, and governance processes.
  • Collaborate with investigations and detection teams to define and operationalize principles for identifying, prioritizing, and responding to unauthorized or adversarial national-security use of OpenAI technology.
  • Engage directly with government stakeholders on AI safety and governance, helping both OpenAI and government partners understand emerging risks and responsible deployment.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 1,001-5,000
