Research Engineer, Search and Knowledge Post-Training

Anthropic
San Francisco, CA
Hybrid

About The Position

We want future AI systems to have superhuman epistemics: the ability to parse evidence at enormous scale and draw rigorous conclusions, both for themselves and for their users. Search is the capability that determines whether a model can pick a signal out of noise, weigh conflicting evidence, and know what it doesn't know. Every higher-order capability we care about depends on search being trustworthy. If we want Claude to be a trustworthy collaborator on real knowledge work, it has to be a trustworthy searcher.

We're hiring a Research Engineer to advance the science and engineering that make Claude this trustworthy searcher. This is a research role for someone who is unusually rigorous: you'll define hypotheses about what makes a model an epistemically sound searcher, design the experiments that test them, and turn search post-training from a craft into a measurable science. You'll be the person who insists on cleanly isolated variables, calibrated metrics, and reproducible signal, and you'll have the engineering skill to build the infrastructure needed to get them.

This work sits at the intersection of reinforcement learning, retrieval, and evaluation, and it directly shapes how Claude behaves in any setting where evidence matters: research, analysis, agentic workflows, and beyond.

Requirements

  • You have an unusually rigorous, quantitative mindset.
  • You are an outstanding software engineer in Python, comfortable across the stack from data pipelines to RL training to evaluation infrastructure.
  • You have shipped real ML research repeatedly, with taste for which experiments are worth running; you instinctively reach for ablations, controls, and confidence intervals to understand why a result holds.
  • You operate well with high autonomy and ambiguity, and can identify the most impactful problem to work on next without being told.
  • You want to set research direction, advocate for experimental rigor, and raise the bar for the people around you.
  • You communicate research clearly in writing and in person; you can defend a design choice and update on evidence.

Nice To Haves

  • Hands-on experience with RL on large language models — environments, reward design, training stability, scaling behavior.
  • Background in search, retrieval, RAG, or agents that reason over external information sources.
  • Experience building evaluations for open-ended or knowledge-intensive LLM behavior.
  • Prior work in a research-heavy environment — frontier AI lab, quant research firm, or similarly demanding empirical setting — where rigor is the default.
  • Published research on LLMs, RL, retrieval, calibration, or related topics.
  • Experience with distributed training systems and large-scale experimentation infrastructure.

Responsibilities

  • Own a research direction for a class of search post-training problems end-to-end: form hypotheses about latent capabilities, design experiments that isolate them, run training, and decide what to try next.
  • Build the instrumentation that turns environment design into a controlled experiment so we can study how each environment factor contributes to the capabilities we care about, rather than overfitting to any one regime.
  • Design frontier-discriminating evaluations that distinguish genuine reasoning over evidence from plausible pattern matching and that hold up as models improve.
  • Drive optimization rigor across the stack: efficient experiment design, ablations, training run economics, and the discipline to know when a result is real.
  • Collaborate deeply with researchers across post-training, RL infrastructure, and product to translate model behavior in the wild into concrete training signals and back again.
  • Set the bar for the team's experimental standards — what we measure, how we measure it, how we know a result is real.

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours