Researcher, Misalignment Research

OpenAI
San Francisco, CA

About The Position

OpenAI is seeking a Senior Researcher passionate about red-teaming and AI safety to join the Safety Systems team. This team is dedicated to the responsible development and deployment of safe AGI, focusing on identifying, quantifying, and understanding future AGI misalignment risks well in advance. The research task force operates across four pillars: Worst-Case Demonstrations, Adversarial & Frontier Safety Evaluations, System-Level Stress Testing, and Alignment Stress-Testing Research. In this role, you will design and execute cutting-edge attacks, build adversarial evaluations, and advance the understanding of how safety measures can fail and how to fix them. Your insights will directly influence OpenAI’s product launches and long-term safety roadmap.

Requirements

  • 4+ years of experience in AI red-teaming, security research, adversarial ML, or related safety fields.
  • Strong research track record—publications, open-source projects, or high-impact internal work—demonstrating creativity in uncovering and exploiting system weaknesses.
  • Fluency in modern ML/AI techniques and comfort hacking on large-scale codebases and evaluation infrastructure.
  • Ability to communicate clearly with both technical and non-technical audiences, translating complex findings into actionable recommendations.
  • Collaborative mindset, with the ability to drive cross-functional projects that span research, engineering, and policy.

Nice To Haves

  • Ph.D., master’s degree, or equivalent experience in computer science, machine learning, security, or a related discipline.

Responsibilities

  • Design and implement worst-case demonstrations that make AGI alignment risks concrete for stakeholders, with a focus on high-stakes use cases.
  • Develop adversarial and system-level evaluations grounded in those demonstrations, driving adoption across OpenAI.
  • Build tools and infrastructure that scale automated red-teaming and stress testing.
  • Conduct research on failure modes of alignment techniques and propose improvements.
  • Publish influential internal or external papers that shift safety strategy or industry practice.
  • Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.
  • Mentor engineers and researchers, fostering a culture of rigorous, impact-oriented safety work.

Benefits

  • Health insurance
  • Dental insurance
  • Vision insurance
  • Life insurance
  • Disability insurance
  • 401(k)
  • Paid holidays
  • Flexible scheduling
  • Professional development
  • Learning and development program
  • Employee discount programs
  • Wellness programs
  • Diversity programs