Scale AI • posted 16 days ago
$176,000 – $300,000/yr
Full-time • Mid Level
Seattle, WA

About the position

This role will lead the development of machine learning systems to detect fraud, abuse, and trust violations across Scale’s contributor platform. As a core part of our Generative AI data engine, these systems are critical to ensuring the quality, safety, and reliability of the data used to train and evaluate frontier models. You will build scalable ML services that analyze behavioral and content signals, incorporating both classical models and advanced LLM-based techniques. This is a high-impact, product-focused role where you’ll collaborate across engineering, product, and operations teams to proactively surface misuse, defend against adversarial behavior, and ensure the long-term health of our human-in-the-loop data workflows.

Responsibilities

  • Design and deploy machine learning models to detect fraud, quality issues, and violations in large-scale contributor workflows
  • Build real-time and batch detection systems that evaluate account, behavioral, and content-level signals
  • Combine traditional ML techniques with LLMs and neural networks to improve detection capabilities and reduce false positives
  • Create robust evaluation frameworks and actively tune for extremely imbalanced detection scenarios
  • Collaborate closely with product and engineering teams to embed detection systems into contributor-facing workflows and backend infrastructure

Requirements

  • 3+ years of experience building and deploying ML models in production environments
  • Experience with trust & safety, fraud detection, abuse prevention, or adversarial modeling in a real-world setting
  • Proficiency in ML and deep learning frameworks such as scikit-learn, PyTorch, TensorFlow, or JAX
  • Familiarity with LLMs and experience applying foundation models for structured downstream tasks
  • Strong software engineering fundamentals and experience building ML systems in microservice architectures (e.g., using AWS or GCP)
  • Excellent communication skills and a proven ability to work cross-functionally

Nice-to-haves

  • Hands-on experience designing or scaling trust & safety detection systems
  • Familiarity with data quality pipelines or contributor platform risk analysis
  • Contributions to open-source LLM fine-tuning efforts or internal LLM alignment projects
  • Research or published work in top ML venues (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP)

Benefits

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • Learning and development stipend
  • Generous PTO
  • Commuter stipend (eligibility may vary)