About The Position

Data plays an increasingly crucial role at the frontier of AI innovation. Many of the most meaningful advances in recent years have come not from new architectures, but from better data. As a member of the Data Team, your mission is to ensure that the data used to train and evaluate our models meets a high bar for quality, reliability, and downstream impact. You will directly shape how our models perform on critical capabilities: agentic tool use, long-horizon reasoning, and robust safety alignment. Working with world-class researchers on our post-training teams, you’ll help turn fuzzy notions of “good data” into concrete, measurable standards that scale across large data campaigns. We’re looking for engineers who combine strong engineering fundamentals with deep curiosity about data quality and its impact on model behavior.

Requirements

  • Strong engineering fundamentals with experience building data pipelines, QA systems, or evaluation workflows for post-training data and agentic environments
  • Detail-oriented with an analytical mindset, able to identify failure modes, inconsistencies, and subtle issues that affect data quality
  • Solid understanding of how data quality impacts training (SFT and RL) and evaluation, with the ability to translate quality concerns into concrete signals, decisions, and feedback
  • Experience designing and validating automated quality checks, including rule-based systems, statistical methods, or model-assisted approaches such as LLM-as-a-Judge
  • Comfortable working autonomously, owning problems end-to-end, and collaborating effectively with researchers, engineers, and operations partners
  • Proficiency in Python and experience building ML/LLM workflows, with comfort debugging and writing scalable code
  • Experience working with large datasets and automated evaluation or quality-checking systems
  • Familiarity with how LLMs work, including the ability to describe how models are trained and evaluated
  • Excellent communication skills with the ability to clearly articulate complex technical concepts across teams
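As a concrete illustration of the rule-based and statistical quality checks mentioned above, the sketch below validates SFT-style records and rolls failures up into dataset-level signals. This is a hypothetical example, not this team's actual pipeline: the field names ("prompt", "response"), check names, and the length threshold are all illustrative assumptions.

```python
import statistics

# Hypothetical rule-based quality checks for SFT-style training records.
# Field names and thresholds below are illustrative assumptions.

def check_record(record):
    """Return a list of failed-check names for one training record."""
    failures = []
    prompt = record.get("prompt", "")
    response = record.get("response", "")
    if not prompt.strip():
        failures.append("empty_prompt")
    if not response.strip():
        failures.append("empty_response")
    if len(response) > 20_000:  # arbitrary length ceiling
        failures.append("overlong_response")
    if response.strip() and response.strip() == prompt.strip():
        failures.append("echoed_prompt")
    return failures

def dataset_report(records):
    """Aggregate per-record failures into simple dataset-level signals."""
    failure_counts = {}
    lengths = []
    for rec in records:
        for name in check_record(rec):
            failure_counts[name] = failure_counts.get(name, 0) + 1
        lengths.append(len(rec.get("response", "")))
    return {
        "n_records": len(records),
        "failure_counts": failure_counts,
        "median_response_len": statistics.median(lengths) if lengths else 0,
    }
```

In practice, checks like these are cheap to run over an entire campaign, and the aggregated report becomes the "concrete signal" handed back to researchers or vendors.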

Responsibilities

  • Own upstream data quality for LLM post-training and evaluation by analyzing expert-developed datasets and operationalizing quality standards for reasoning, alignment, and agentic use cases
  • Partner closely with research and post-training teams to translate requirements into measurable quality signals, and provide actionable feedback to external data vendors
  • Design, validate, and scale automated QA methods, including LLM-as-a-Judge frameworks, to reliably measure data quality across large campaigns
  • Build reusable QA pipelines that reliably deliver high-quality data to post-training teams for model training and evaluation
  • Monitor and report on data quality over time, driving continuous iteration on quality standards, processes, and acceptance criteria
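The LLM-as-a-Judge responsibility above might, in rough outline, look like the following sketch: a judge scores each record against a rubric, scores from several judge samples are averaged, and records are accepted or rejected against a threshold. The judge here is a stub that scores from surface features so the example runs offline; in a real pipeline it would prompt an actual model with the rubric. The rubric text, threshold, and field names are illustrative assumptions.

```python
from statistics import mean

# Illustrative rubric; a real deployment would tune this per use case.
RUBRIC = "Rate 1-5: is the response correct, complete, and well-formatted?"

def judge_score(record, rubric=RUBRIC):
    """Stub judge: in practice this would prompt an LLM with the rubric
    and parse a 1-5 score from its reply. Here we fake a score from
    simple surface features so the sketch runs without a model call."""
    response = record.get("response", "")
    if not response.strip():
        return 1
    return 4 if len(response) > 20 else 3

def accept_batch(records, threshold=3.5, n_judges=3):
    """Score each record n_judges times (judges may be sampled at
    temperature > 0), average the scores, and accept or reject each
    record against the threshold."""
    results = []
    for rec in records:
        scores = [judge_score(rec) for _ in range(n_judges)]
        avg = mean(scores)
        results.append({"avg_score": avg, "accepted": avg >= threshold})
    return results
```

Validating such a judge typically means checking its scores against a human-labeled sample before trusting it across a large campaign.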

Benefits

  • Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
  • Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
  • Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
  • Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time.
  • Opportunities to connect with teammates: Lunch and dinner are provided daily, with regular off-sites and team celebrations.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 51-100 employees
