Program Manager, Data Quality

Nuro, Mountain View, CA

About The Position

Nuro is seeking a Program Manager, Data Quality to join its ML Operations team. This role is crucial for ensuring the accuracy and reliability of the data used to train Nuro's self-driving vehicles. The position involves investigating annotation faults, managing quality processes for offshore annotation teams, and acting as a liaison between Autonomy Engineering and ML Operations. The ideal candidate will have a strong understanding of ML data pipelines, SQL fluency, and a systems-level mindset for identifying and fixing structural quality issues. This role offers a unique opportunity to make a significant impact on Nuro's mission of making autonomy accessible and safe.

Requirements

  • 5+ years of experience embedded with ML, data operations, or software engineering teams.
  • SQL fluency.
  • Deep experience with ML data pipelines and labeling ecosystems: annotation workflows, quality sampling, taxonomy design, and inter-annotator agreement (see the sketch following this list).
  • A systems-level mindset: identify where quality breaks down structurally and design the mechanism that fixes it.
  • Clear, confident communication: translate nuanced data quality findings into precise safety or business risks for senior leadership.
  • Experience managing large-scale offshore or globally distributed annotation teams.
  • Bachelor's degree in a technical or business discipline, or equivalent practical experience.
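
As a rough illustration of the inter-annotator agreement work referenced above, here is a minimal sketch of computing Cohen's kappa between two annotators on a shared sample of labels. The label classes and data are hypothetical; a production pipeline would pull annotations from the labeling database rather than hard-coding them.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators beyond chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each annotator's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical labels for the same eight frames from two annotators.
annotator_1 = ["car", "car", "pedestrian", "cyclist", "car", "car", "pedestrian", "car"]
annotator_2 = ["car", "car", "pedestrian", "car", "car", "car", "pedestrian", "car"]
print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.2f}")
```

In practice this would be computed per taxonomy class and per batch, so that low-agreement classes can be traced back to ambiguous labeling guidelines rather than individual annotators.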

Nice To Haves

  • Background in autonomous vehicles, robotics, computer vision, or ML model training.
  • Prior experience in ML engineering, data engineering, or technical consulting.
  • A demonstrated track record of improving training data quality at scale, with metrics to show for it.

Responsibilities

  • Investigate annotation faults by querying databases directly to identify root causes.
  • Translate model accuracy regressions into specific labeling workflow problems and scope fixes.
  • Audit live pipelines to spot systematic edge-case misclassifications and build cases for restructuring labeling.
  • Manage quality-management processes for the offshore annotation team.
  • Serve as the primary contact for quality diagnostics within ML Operations and for Autonomy Engineering on how labeling decisions affect model learning.
  • Define what "good" looks like for each data type across active labeling pipelines and instrument those pipelines to measure against it continuously.
  • Build inter-annotator agreement frameworks, taxonomy governance, and sampling methodologies for offshore production scale.
  • Design scalable processes to reduce systematic errors and support evolving ML training requirements.
  • Audit live workflows, query production databases, and trace accuracy failures to their structural root cause.
  • Apply statistical process control thinking to distinguish individual labeling errors from labeling system errors and drive corrective changes (see the sketch after this list).
  • Connect ML labeling quality metrics directly to model performance and safety outcomes in partnership with Autonomy Engineering leadership.
  • Build executive-ready reporting that frames quality gaps as safety and business signals.
  • Drive alignment across engineering, product, and global ops with clear analysis and well-reasoned recommendations.
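
To make the statistical process control idea above concrete, here is a minimal sketch that flags labeling batches whose error rate falls outside three-sigma control limits on a p-chart. Points inside the limits suggest ordinary annotator variation; points outside them (or sustained drift) point to a structural problem with the labeling system itself. The batch figures below are hypothetical.

```python
import math

def p_chart_limits(error_rates, sample_sizes):
    """Three-sigma control limits for a proportion (p) chart."""
    total_errors = sum(r * n for r, n in zip(error_rates, sample_sizes))
    total_items = sum(sample_sizes)
    p_bar = total_errors / total_items  # overall error proportion across batches
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma))
    return p_bar, limits

# Hypothetical audited batches: per-batch error rate and number of labels sampled.
rates = [0.021, 0.018, 0.025, 0.019, 0.062, 0.022]
sizes = [500, 500, 480, 520, 500, 510]

p_bar, limits = p_chart_limits(rates, sizes)
for i, (rate, (lcl, ucl)) in enumerate(zip(rates, limits)):
    status = "OUT OF CONTROL" if not (lcl <= rate <= ucl) else "in control"
    print(f"batch {i}: rate={rate:.3f} limits=({lcl:.3f}, {ucl:.3f}) -> {status}")
```

A batch flagged this way (batch 4 in the sample data) would trigger a root-cause investigation into the workflow or guideline change behind it, rather than retraining individual annotators.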

Benefits

  • Annual performance bonus
  • Equity
  • Competitive benefits package