About The Position

The Policy & Safety team sits within Content Platform in the Experience Mission, building the systems that keep Spotify safe, compliant, and trusted by millions of users and creators. This team owns Spotify’s content moderation infrastructure — from detection models to policy enforcement systems and compliance data pipelines. Working at the intersection of machine learning, platform engineering, and regulatory compliance, the team partners closely with Trust & Safety, Legal, and Public Affairs. They’re on the critical path for every new content type and social feature — including messaging, comments, and collaborative experiences — ensuring safety is built in from day one. With a strong focus on “safety by default,” the team is investing in large-scale rearchitecture and ML-driven systems to proactively protect users and enable safer interactions across the platform.

Requirements

  • Solid experience building and deploying machine learning systems in production environments at scale
  • Experience training, evaluating, and maintaining ML models using modern frameworks such as PyTorch
  • Deep understanding of machine learning evaluation, including dataset design, metrics, and continuous improvement systems
  • Ability to design systems that balance performance, reliability, and real-world impact in high-stakes domains
  • Commitment to building safe, responsible, and user-centric ML systems
  • Comfort working across disciplines, partnering with legal, policy, and product stakeholders
  • Experience leading technical projects and influencing direction within a team or product area
  • Experience with distributed systems or backend technologies (e.g., Scala)

Responsibilities

  • Design, build, and ship production-grade machine learning systems that power content safety and policy enforcement at Spotify scale
  • Own and lead key technical initiatives across detection, classification, and policy evaluation systems
  • Develop and maintain ML models for content moderation, including multimodal and LLM-based systems
  • Build robust evaluation frameworks, including standardized datasets, offline and online metrics, and continuous improvement loops
  • Drive experimentation to improve model performance, reliability, and fairness in safety-critical systems
  • Collaborate closely with cross-functional partners in Trust & Safety, Legal, and Public Affairs to align on policy and enforcement needs
  • Provide technical leadership within the team, mentoring engineers and contributing to ML strategy and prioritization
  • Represent technical decisions and trade-offs in stakeholder discussions and influence product direction

Benefits

  • Health insurance
  • Six-month paid parental leave
  • 401(k) retirement plan
  • Monthly meal allowance
  • 23 paid days off
  • Paid flexible holidays
  • Paid sick leave