Research Engineer, Model Evaluations

Anthropic
San Francisco, CA
Hybrid

About The Position

We're looking for Research Engineers to build the evaluations that tell us — and the world — what Claude can actually do. Your work will turn ambiguous notions of "intelligence" into clear, defensible metrics that researchers, leadership, and the public can rely on. You'll design and implement evaluations across the full spectrum of Claude's capabilities and personality, and build the infrastructure that runs them reliably at scale. You'll partner closely with researchers throughout the lifecycle of a new capability — from defining what to measure, to running the eval against live training checkpoints, to interpreting the results. The goal is to make Anthropic the leader in extremely well-characterized AI systems, with performance that is exhaustively measured and validated across the tasks that matter.

Requirements

  • Strong Python programming skills, including experience writing production or research infrastructure code
  • Experience building or operating distributed systems, data pipelines, or other infrastructure that needs to be reliable at scale
  • Clear written and verbal communication, especially when explaining technical results to non-specialists
  • Comfort operating in an on-call or production-support capacity when training runs are live
  • Care about the societal impacts of your work and an interest in steering powerful AI to be safe and beneficial

Nice To Haves

  • Hands-on experience using large language models such as Claude, including prompting, sampling, and scaffolding
  • Background in data visualization and a track record of building dashboards people actually trust and use
  • Experience developing robust evaluation metrics for language models
  • Experience with observability, monitoring, or experiment-tracking systems
  • Background in statistics and experimental design
  • Experience with large-scale dataset sourcing, curation, and processing
  • Experience running or supporting ML training infrastructure
  • A bias toward picking up slack and operating flexibly across team boundaries
  • Enjoyment of pair programming (we love to pair)

Responsibilities

  • Design and run new evaluations of Claude's capabilities — reasoning, agentic behavior, knowledge, safety properties — and produce visualizations that make the results legible to researchers and decision-makers
  • Build and harden the distributed eval execution platform so hundreds of evals run reliably against checkpoints throughout production RL training runs
  • Own the dashboards researchers and leadership use to monitor model health during training, improving signal-to-noise, reducing latency, and making regressions impossible to miss
  • Debug anomalous eval results mid-training-run, determine whether the cause is a model change or an infrastructure issue, and communicate the answer clearly under time pressure
  • Improve the tooling, libraries, and workflows researchers use to implement and iterate on evaluations
  • Partner with research teams across the full lifecycle of a new capability — from defining what to measure to interpreting results as training progresses
  • Run experiments to characterize how prompting, sampling, and scaffolding choices affect results on internal and industry benchmarks
  • Communicate evaluations and their results to internal stakeholders and, where appropriate, external audiences

Benefits

  • Competitive compensation
  • Optional equity donation matching
  • Generous vacation
  • Parental leave
  • Flexible working hours