Lead Generative AI Analyst - Chinese

Welocalize
San Francisco, CA
Onsite

About The Position

We are hiring an onsite Lead Generative AI Analyst in San Jose, CA to lead day-to-day execution of Chinese (zh-CN) multimedia and language data labeling and review work (e.g., video, images, and related metadata). This role serves as the primary liaison between annotators, quality stakeholders, and internal project teams, with accountability for throughput, quality performance, and guideline consistency.

Requirements

  • Native-level proficiency in Chinese (zh-CN).
  • 5+ years of experience in data annotation, multimodal data labeling, computer vision labeling, content QA, or a closely related field, including responsibility for quality and delivery outcomes.
  • 2+ years of experience leading teams (people leadership, workflow leadership, or lead reviewer responsibilities).
  • Demonstrated track record of delivering high-quality outputs on schedule in a production environment.
  • Strong written and verbal communication in English, including the ability to write clear guidelines and provide actionable feedback.
  • Experience working from structured guidelines and managing complex edge cases with consistent decision-making.
  • Comfortable working with multimedia content for extended periods while maintaining accuracy and attention to detail.
  • Ability to be onsite at client headquarters full time.

Nice To Haves

  • Experience leading annotation programs involving temporal labeling (event boundaries, segmenting, tracking, time-based attributes).
  • Experience with QA/audit operations (sampling plans, defect taxonomy, root cause analysis, calibration).
  • Familiarity with common annotation paradigms (bounding boxes, segmentation, keypoints, tracking, classification) and annotation tools.
  • Experience coordinating with cross-functional teams (product, research, engineering, vendor partners) to resolve guideline ambiguity and improve throughput/quality.
  • Basic scripting (Python) or data analysis skills to support reporting and workflow optimization.

Responsibilities

  • Lead a team of multimodal annotators and reviewers to deliver work on time and to quality targets.
  • Act as the day-to-day point of contact for workflow questions; triage issues and escalate unclear cases with clear examples.
  • Track team performance and quality metrics (throughput, error rates, rework, backlog) and drive corrective actions.
  • Own calibration and consistency efforts across the team; reduce reviewer-to-reviewer variance through structured reviews and refreshers.
  • Create, maintain, and improve guidelines and specifications; document edge cases and update instructions as workflows evolve.
  • Establish and run quality workflows (sampling, second-pass review, audits) and ensure follow-up actions are completed.
  • Coordinate across internal stakeholders and third-party partners as needed to meet delivery targets.
  • Support onboarding and training for new annotators, including assessments and ongoing coaching.
  • Maintain clear reporting on delivery status, risks, and blockers.