About The Position

We are looking for highly detail-oriented Generative AI Analysts to join our team onsite in San Jose, California. In this role, you will contribute to the development of cutting-edge AI technologies by supporting the annotation, evaluation, and quality review of multilingual and multimodal datasets used to train generative AI systems. This position is ideal for candidates passionate about AI, language, data quality, and emerging technologies, with strong analytical skills and native-level Chinese (zh-CN) proficiency.

Requirements

  • Native-level proficiency in Chinese (zh-CN) and strong English communication skills (written and verbal)
  • Excellent attention to detail and ability to follow complex guidelines and processes
  • Strong interest in generative AI, machine learning, and emerging technologies
  • Bachelor’s degree or equivalent practical experience
  • Ability to work onsite full-time in San Jose, CA

Nice To Haves

  • Previous experience in data annotation, content review, quality assurance, or labeling operations
  • Experience or academic background in Finance, STEM, Legal, Medical, Coding, or other specialized fields
  • Familiarity with generative AI systems, LLMs, RLHF, or multimodal AI workflows
  • Experience evaluating prompts, responses, images, videos, or AI training datasets
  • QA/testing experience within AI, data operations, or content moderation environments
  • Experience with taxonomy creation, evaluation rubrics, or dataset quality initiatives
  • Python or scripting knowledge
  • Proficiency in additional languages (Korean, Japanese, Spanish, German, French, etc.)

Responsibilities

  • Perform annotation and labeling tasks for Chinese (zh-CN) generative AI datasets, including text, image, video, audio, and multimodal content
  • Review and evaluate AI-generated prompts and responses across a variety of topics and use cases
  • Conduct quality assurance checks to ensure accuracy, consistency, and compliance with annotation guidelines
  • Identify edge cases, inconsistencies, and quality issues in datasets and model outputs
  • Support data categorization, tagging, evaluation, and content review workflows for machine learning systems
  • Assist in the creation and refinement of annotation guidelines and evaluation frameworks
  • Collaborate with cross-functional teams to improve operational processes and annotation quality
  • Provide feedback on tools, workflows, and annotation methodologies

Benefits

  • In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. In addition, we employ anti-fraud checks to ensure all candidates meet the requirements of the program.