Data Scientist, Safety

OpenAI
San Francisco, CA
$230,000 - $325,000
Hybrid

About The Position

OpenAI’s Safety teams work to ensure our products are safe, trusted, and resilient as frontier AI systems scale globally. We tackle some of the company’s most important challenges, including understanding and preventing misuse and misalignment, intercepting fraud and abuse, and protecting vulnerable users. We are hiring Data Scientists to help build the analytical foundations that allow OpenAI to deploy increasingly capable AI responsibly. This is a high-impact role at the intersection of product, safety, policy, and research. As a Data Scientist, Safety, you will help solve complex and ambiguous problems where rigorous analysis directly informs critical decisions. Depending on your background and team alignment, you may work on any of the areas listed under Responsibilities below.

Requirements

  • Strong statistical reasoning and analytical judgment
  • Experience with experimentation, causal inference, or observational analysis
  • Strong SQL and Python skills
  • Experience working with messy, incomplete, or noisy datasets
  • Ability to structure open-ended business or risk problems
  • Excellent communication with technical and non-technical stakeholders
  • High ownership and comfort operating independently

Nice To Haves

  • Trust & Safety / Integrity
  • Fraud & abuse
  • Security analytics
  • AI/ML model measurement and evaluation
  • Alignment and AI safety research
  • Biosecurity, synthetic biology, infectious diseases, or computational biology

Responsibilities

  • Measure harmful or abusive behavior across OpenAI’s products
  • Detect fraud, manipulation, and coordinated misuse
  • Evaluate and improve safety classifiers, rules systems, mitigation systems, and human review workflows
  • Design experiments and causal analyses to understand product, policy, and mitigation impacts
  • Build prevalence estimators, dashboards, monitoring systems, and executive decision frameworks
  • Diagnose gaps in safety and integrity systems using behavioral and product data, and help quantify and navigate false positive / false negative tradeoffs
  • Translate ambiguous safety risks into measurable problems and evidence-based recommendations
  • Partner with Product, Engineering, Policy, Research, and Operations teams to improve safety outcomes
  • Build zero-to-one analytical systems in rapidly evolving domains

Benefits

  • Health insurance
  • Dental insurance
  • Vision insurance