About The Position

Protection Science Engineering is an interdisciplinary role combining data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer within Integrity and Investigations, you will be responsible for designing and building systems to proactively identify and enforce against abuse of OpenAI’s products. This includes ensuring we have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against our highest-risk harms. You will also respond to and investigate critical escalations, especially those that are not caught by our existing safety systems. This requires expert understanding of our products and data, and involves working cross-functionally with product, policy, and engineering teams.

This role can be based in our San Francisco, DC, or NY office and includes participation in an on-call rotation that involves resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material.

Requirements

  • At least 4 years of experience doing technical analysis and detection, especially using SQL and Python.
  • Experience in trust and safety, and/or experience working closely with policy, enforcement, and engineering teams.
  • An investigative mindset.
  • Experience with basic data engineering, such as building core tables or writing data pipelines in production, and with machine learning principles and execution.
  • Basic software development skills are a plus, as this role involves writing production code.
  • Experience scaling and automating processes, especially with language models.

Responsibilities

  • Scope and implement abuse monitoring requirements for new product launches.
  • Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks.
  • Prototype systems for detecting, reviewing, and enforcing against abuse for major harms, and mature them into production.
  • Work with Product, Policy, Ops, and Investigations teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.