Program Manager, Data Quality

Nuro · Mountain View, CA

About The Position

This is a rare opportunity to own something that genuinely matters. You will co-own a portfolio of active labeling pipelines alongside senior leadership, with real authority to set the quality standard, measure against it, and close the gaps. Not to report on pipelines. To make them better.

The work connects directly to Nuro's mission. A single systematic flaw in annotation can propagate silently through training and surface as a safety regression on the road. You are the person who finds it before that happens, and who builds the systems that prevent it from happening again. That kind of upstream impact is hard to find in most roles. Here, it's the whole job.

You'll be well supported: partnering closely with senior engineering and operations leadership, with the access and visibility to do your best work. What we ask in return is curiosity, rigor, and genuine care about getting it right.

Requirements

  • 5+ years embedded in ML, data operations, or software engineering teams: close to the work, not managing from a distance.
  • SQL fluency: you can investigate a labeling anomaly yourself, form a hypothesis, and test it without waiting on a data engineer.
  • Deep experience with ML data pipelines and labeling ecosystems: annotation workflows, quality sampling, taxonomy design, and inter-annotator agreement.
  • A systems-level mindset: you identify where quality breaks down structurally and design the mechanism that fixes it, not just the process that patches it.
  • Clear, confident communication: you can translate a nuanced data quality finding into a precise safety or business risk that senior leadership can act on.
  • Experience managing large-scale offshore or globally distributed annotation teams.

Nice To Haves

  • Background in autonomous vehicles, robotics, computer vision, or ML model training.
  • Prior experience in ML engineering, data engineering, or technical consulting.
  • A demonstrated track record of improving training data quality at scale, with metrics to show for it.
  • Bachelor's degree in a technical or business discipline, or equivalent practical experience.

Responsibilities

  • Own the quality diagnostic layer across our labeling pipelines: defining the standard, building the instrumentation, and closing the gaps that matter most to model safety and performance.
  • Define what 'good' looks like for each data type across active labeling pipelines, and instrument those pipelines to measure against it continuously, not just at delivery milestones.
  • Build inter-annotator agreement frameworks, taxonomy governance, and sampling methodologies that hold up at offshore production scale.
  • Design scalable processes to reduce systematic errors and support evolving ML training requirements.
  • Audit live workflows, query production databases, and trace accuracy failures to their structural root cause, then return with an evidence-based plan that fixes the mechanism, not just the symptom.
  • Apply statistical process control thinking to distinguish an individual labeling error from a labeling system error, and drive the changes that address each.
  • Connect ML labeling quality metrics directly to model performance and safety outcomes in close partnership with Autonomy Engineering leadership.
  • Build executive-ready reporting that frames quality gaps as safety and business signals — not operational updates.
  • Drive alignment across engineering, product, and global ops with clear analysis and well-reasoned recommendations.
  • Manage quality-management processes for the offshore annotation team.
  • Be the bridge: the person ML Operations leans on for quality diagnostics, and the person Engineering trusts to understand how a labeling decision upstream shapes what the model learns downstream.

Benefits

  • Annual performance bonus
  • Equity
  • Competitive benefits package