Research Scientist - Frontier Data

AfterQuery
San Francisco, CA
$250,000 - $450,000

About The Position

AfterQuery builds the training data and evaluation infrastructure that frontier AI labs use to make their models better. We work with the world's leading labs to design high-signal datasets and run rigorous evaluations that go beyond static benchmarks. We are a small, early team (post-Series A) where individual contributors have a direct impact on how the next generation of models learn and improve.

The Role

You'll design the datasets and evaluation frameworks that shape how frontier models are trained and measured. Working directly with research teams at top AI labs, you'll experiment with data collection strategies, diagnose model failure modes, and develop the metrics that determine whether a model is actually getting better. This is hands-on, high-leverage work: you'll go from hypothesis to live experiment quickly, and your output will directly influence model training runs at scale.

Requirements

  • Undergraduate or master's research experience (a PhD is not required)
  • Genuine obsession with how data structure, selection, and quality drive model behavior
  • Ability to design lightweight experiments, move fast, and extract actionable insights from messy results
  • Comfort working across domains (you'll touch finance, software engineering, policy, and more)
  • Strong quantitative instincts and familiarity with LLM training pipelines, RLHF/RLVR, or evaluation methodology
  • A bias toward building over theorizing

Nice To Haves

  • Major plus: prior work or internship experience at an RL environment company, or at an AI safety or benchmarking organization such as METR or Artificial Analysis

Responsibilities

  • Design data slices and explore data shapes that expose meaningful model failure modes across domains like finance, code, and enterprise workflows
  • Build and refine evaluation rubrics and reward signals for RLHF and RLVR training pipelines
  • Model annotator behavior and run experiments to improve specific model capabilities
  • Develop quantitative frameworks for measuring dataset quality, diversity, and downstream impact on model alignment and capability
  • Partner with lab research teams to translate their training objectives into concrete data and evaluation specifications