Cybersecurity Landscape Analyst

OpenAI
Washington, DC

About The Position

We are looking for a Cybersecurity Landscape Analyst to help OpenAI understand how the external cyber threat environment is evolving, and what that evolution means for our products, customers, and the broader AI ecosystem. This is an outward-facing intelligence and analysis role: the Cybersecurity Landscape Analyst monitors emerging attacker TTPs, threat-group behaviors, infrastructure trends, and real-world cyber innovation at the intersection of AI and all cyber threat surfaces, including devices and robotics. Using structured research, competitive intelligence, adversarial thinking, and scenario analysis, you will stress-test assumptions about how frontier AI capabilities could be misused, targeted, or integrated into broader cyber campaigns, even in the absence of active warnings or internal incidents.

This role does not conduct internal investigations, run detection on platform data, or own OpenAI's infrastructure protection or incident response. Instead, it translates the external cyber landscape into clear risk context, strategic foresight, and decision support for internal stakeholders, with defined handoffs into operational, detection, and security teams. While not the owner of those functions, the role works closely with cross-functional teams, drawing on their operational perspectives to sharpen external analysis and bringing them threat trends and insights into attacker innovation to inform priorities and preparedness. In other words, this role sits at the boundary between external intelligence and internal execution, ensuring bi-directional flow between strategic cyber analysis and the teams responsible for implementation.

Your work will synthesize signals from external sources alongside insights from the Integrity, Security, and Safety Systems teams to produce crisp strategic assessments, priority questions, and actionable recommendations.

Requirements

  • Have significant experience (typically 5+ years) in cybersecurity intelligence, strategic threat analysis, trust & safety, or national-level cyber risk assessment.
  • Demonstrate deep familiarity with cyber threat actors, intrusion tradecraft, vulnerability exploitation trends, and cybercrime ecosystems.
  • Have experience translating external threat reporting and OSINT into structured risk assessments and executive guidance.
  • Are comfortable using adversarial thinking and foresight methodologies (e.g., horizon scanning, scenario planning, red-teaming) to explore emerging threat vectors.
  • Can clearly distinguish between intelligence analysis and operational security work, and work effectively across that boundary.
  • Are an excellent, credible communicator capable of distilling complex cyber threat dynamics into crisp, decision-relevant insights.
  • Currently hold or are eligible for a U.S. security clearance.

Responsibilities

  • Monitor and interpret the evolving cyber threat landscape
      • Track emerging cyber TTPs, attacker innovation, threat-group behavior, and ecosystem-level shifts relevant to AI systems.
      • Analyze how state actors, criminal networks, hacktivists, and hybrid actors are adapting AI tools or targeting AI infrastructure.
      • Identify structural risk patterns that may affect AI providers, customers, and downstream sectors.
  • Conduct structured external research and adversarial analysis
      • Use competitive intelligence, red-team-style thinking, and scenario methods to explore how frontier AI capabilities could be exploited or targeted.
      • Develop forward-looking assessments of how cyber threats may evolve over the next 6–24 months.
      • Surface “unknown unknowns” and stress-test prevailing assumptions about attacker incentives, constraints, and capabilities.
  • Translate external signals into strategic risk context for cross-functional teammates
      • Produce concise, executive-ready intelligence estimates that articulate threat relevance, potential impact pathways, and confidence levels.
      • Develop priority questions and structured risk frames that inform product, safety, security, and policy decision-making.
      • Benchmark OpenAI’s risk posture against real-world incidents affecting other AI providers and adjacent technology sectors.
  • Support product and ecosystem readiness
      • Contribute to product reviews and safety readiness processes by outlining plausible cyber-enabled misuse or targeting modes grounded in external analysis.
      • Help shape practical mitigation considerations, with clear handoffs to the operational and security teams that own implementation.
  • Represent OpenAI in sensitive external engagements
      • Serve as a credible analytical counterpart in engagements with a range of external partners.
      • Communicate OpenAI’s threat perspective and align on shared risk trends and emerging threat vectors.
      • Support collaboration in ways that complement, rather than duplicate, incident response, investigations, or core security operations functions.