Trust and Safety - Senior Data Scientist

Scale AI, San Francisco, CA

About The Position

Scale is at the frontier of Generative AI and human-AI collaboration. The Gen AI Ops Trust and Safety team protects contributor integrity across a marketplace of hundreds of thousands of contributors training foundation models.

We're looking for a modeling-focused Data Science Lead who ships fast, thinks in systems, and uses AI coding tools as a core part of their workflow. This is a high-autonomy IC role: you will own fraud and abuse detection models end to end, from label definition through feature engineering, training, evaluation, and production deployment.

You'll work in a small team that operates at 10x velocity by pairing deep analytical judgment with AI-augmented development (Cursor, Claude Code). If you've felt limited by teams that move slowly or separate "analysis" from "building," this role eliminates that gap entirely.

Requirements

  • 5–8 years in Data Science or Machine Learning, with at least one production fraud/abuse/integrity model shipped end to end (labels → features → deployment → iteration).
  • Strong proficiency with AI coding assistants. You should already be using these tools daily and understand how to leverage them for rapid SQL/Python development, data exploration, and code generation — not as a novelty but as a core workflow.
  • Deep feature engineering instinct — you see a messy behavioral log and immediately think about what signals it contains, how to extract them, and whether they'll hold up adversarially.
  • Expert SQL and Python. Comfortable writing 200+ line CTEs, building data pipelines, and working with large-scale event data.
  • Experience with unsupervised methods (clustering, anomaly detection) alongside supervised classification. Bonus if you've worked with semi-supervised approaches for noisy labels.
  • Comfort operating across the full stack: Snowflake DDLs, Python modeling, rule engines, alerting systems, Google Docs/Sheets automation, workflow orchestration.
  • Clear, direct communication. You can explain a model's precision/recall tradeoff to an ops lead and debug a Snowflake query in the same afternoon.

Nice To Haves

  • Marketplace or gig-economy platform experience.
  • Experience building linkage/graph-based detection.
  • Familiarity with identity verification vendors and their signal taxonomies.
  • Experience building automated alerting and monitoring pipelines.

Responsibilities

  • Own the model lifecycle — from label definition through feature engineering, training, evaluation, and production deployment. Ground truth is ambiguous and non-binary; you'll need to make it work anyway.
  • Build and extend the feature store — design and productize signals across identity, behavioral, and third-party data sources that power detection models reliably at scale.
  • Design and ship detection systems — not just notebooks. Rules, clustering, anomaly detection — all the way to production decisions.
  • Move fast with AI tools — use AI-assisted IDEs and copilots daily to prototype pipelines, run investigations, and automate workflows. This is how the team operates.
  • Partner with Ops and Engineering to close the loop between model output and real-world actions, and continuously improve the contributor experience.

Benefits

  • Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.
  • This role may be eligible for additional benefits, such as a commuter stipend.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 1,001–5,000
