Generative AI Analyst - Chinese

Welocalize, San Francisco, CA
Onsite

About The Position

We are hiring onsite Generative AI Analysts in San Jose, CA to perform Chinese (zh-CN) multimedia and language data labeling and review work (e.g., video, images, and related metadata). This role focuses on annotation, data quality, prompt evaluation, and the creation of high-quality datasets used to train and improve generative AI models.

Requirements

  • Native-level proficiency in Chinese (zh-CN) and English with excellent written and verbal communication skills
  • Strong attention to detail and ability to follow complex annotation guidelines
  • Interest in generative AI, machine learning, and emerging AI technologies
  • Ability to work onsite at client headquarters (San Jose, CA) full time
  • Bachelor’s degree or equivalent practical experience

Nice To Haves

  • Experience with data annotation, content evaluation, labeling operations, or quality review workflows preferred
  • Domain expertise in Finance, STEM, Legal, Medical, Coding, or other specialized fields is a plus
  • Familiarity with generative AI systems, large language models, RLHF, or multimodal AI workflows
  • Proficiency in an additional language preferred (e.g., Korean, Japanese, Spanish, German, or French)
  • Experience annotating prompts, responses, images, video sequences, or other AI training data
  • QA/testing experience within AI, data operations, or content review environments
  • Experience working with evaluation rubrics, taxonomy development, or dataset quality initiatives
  • Python or scripting experience is a plus

Responsibilities

  • Perform annotation and labeling tasks for Chinese language (zh-CN) generative AI datasets, including text, image, video, and multimodal content
  • Create, review, and evaluate prompts and responses across a variety of domains and use cases
  • Conduct quality assurance reviews to ensure annotation accuracy, consistency, and adherence to guidelines
  • Assist in developing and refining annotation guidelines and evaluation criteria
  • Support data collection, categorization, tagging, and evaluation workflows for machine learning systems
  • Identify edge cases, inconsistencies, and quality issues within datasets and model outputs
  • Collaborate with internal teams and external partners to improve annotation quality and operational efficiency
  • Provide feedback on workflows, tooling, and annotation processes