Human Data Operations Manager

Cartesia
San Francisco, CA
Onsite

About The Position

Our mission is to architect AI that learns from and interacts with the world like humans do. We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team pairs deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences. We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks, and others. We're fortunate to have the support of many amazing advisors and 90+ angels across many industries, including the world's foremost experts in AI.

Requirements

  • 5+ years in operations, workforce management, or data annotation systems
  • Experience managing large contractor or vendor-based workforces
  • Proven ability to scale operations from zero to production
  • Systems thinking with the ability to design scalable operational frameworks
  • Strong analytical skills with comfort around metrics like inter-rater reliability, precision, and throughput
  • Ability to execute quickly under ambiguity with close attention to quality and edge cases

Nice To Haves

  • Experience in AI/ML data operations or evaluation pipelines
  • Background in audio, speech, or language-related workflows
  • Familiarity with QA systems and annotation tooling
  • Experience with marketplace platforms such as Upwork or Mercor
  • Exposure to multilingual operations

Responsibilities

  • Design and implement workforce structure across languages, skill tiers, and use cases, including evaluators, auditors, and leads for TTS products
  • Build capacity models to support continuous eval pipelines and data production workflows
  • Own relationships with vendors such as data annotation firms and contractor platforms, negotiating rate cards, SLAs, and throughput guarantees
  • Decide on build, buy, or hybrid workforce models and continuously benchmark cost and performance across regions
  • Design multi-layer QA systems spanning self-checks, peer review, audits, and gold tasks
  • Define and track inter-rater reliability, error rates by category, and annotator-level performance distributions
  • Build escalation and retraining workflows to maintain quality at scale
  • Run day-to-day operations including task allocation, throughput tracking, and SLA adherence
  • Build systems to reduce evaluator fatigue, rotate task types, and maintain consistency across large-scale evaluations
  • Partner with tooling teams to improve evaluator UX and with data teams to ensure clean, structured outputs for model training

Benefits

  • Competitive base salary alongside attractive equity package
  • A monthly stipend to help you get to and from the office
  • Flexible PTO: take as much time as you need to recharge
  • Lunch, dinner, and plenty of snacks provided daily