OpenAI is seeking a Senior Researcher passionate about red-teaming and AI safety to join the Safety Systems team. The team is dedicated to the responsible development and deployment of safe AGI, focusing on identifying, quantifying, and understanding future AGI misalignment risks well in advance.

The research task force operates across four pillars: Worst-Case Demonstrations, Adversarial & Frontier Safety Evaluations, System-Level Stress Testing, and Alignment Stress-Testing Research.

In this role, you will design and execute cutting-edge attacks, build adversarial evaluations, and advance the understanding of how safety measures can fail and how to fix them. Your insights will directly influence OpenAI’s product launches and long-term safety roadmap.
Job Type
Full-time
Career Level
Senior