Researcher, Alignment Training

OpenAI · San Francisco, CA

About The Position

The Alignment Training team studies how frontier models acquire durable behavioral tendencies across the training stack. We work on identifying which behaviors can be shaped through pre-training, mid-training, and post-training; building the data, objectives, and evaluations needed to influence them; and determining whether the resulting behavior reflects a general learned tendency or a narrow artifact of the training distribution. Our work spans synthetic data, pre-training, mid-training, post-training, model behavior, and evaluation. We study how models learn to interpret intent, follow instructions, reason through tasks, express uncertainty, act honestly, and remain reliable under new conditions. The goal is to make desirable tendencies emerge early, strengthen throughout training, and appear robustly in deployed systems.

We’re looking for a senior researcher with exceptional technical depth in large-scale model training, synthetic data, or evaluation who is excited to study how training choices shape aligned behavior in frontier models. You will help shape the research agenda for alignment training: defining the behaviors we want models to learn, designing data and training interventions to teach them, and building the evaluation loops needed to tell whether those behaviors are broad, robust, and durable.

The strongest candidates will be able to move from an ambiguous behavioral question to a concrete experimental program: formulate the hypothesis, design the intervention, build the pipeline, run the experiment, and decide whether the result is real. This role is especially well suited for someone who wants to work close to the core model training loop, where choices about data, objectives, and evaluation directly shape how aligned deployed systems are.

Requirements

You may be a good fit if you:

  • Have a strong record of technically excellent work in large-scale ML, especially in pre-training, post-training, synthetic data, model evaluation, or training infrastructure.
  • Are comfortable designing experiments where the signal is subtle, noisy, or indirect.
  • Can move between research taste and engineering execution: forming hypotheses, building pipelines, running experiments, analyzing results, and turning findings into the next iteration.
  • Have unusually good judgment about which research questions are worth pursuing and which signals are strong enough to trust.
  • Care about making models more useful, honest, steerable, and reliable for real users.
  • Are excited by alignment problems, even if you have not worked in alignment before.
  • Communicate clearly across research, engineering, and product contexts.
  • Prefer practical, evidence-driven work grounded in experiments.

Responsibilities

  • Develop synthetic data methods that teach models higher-level behavioral tendencies, such as understanding user intent, following instructions reliably, reasoning clearly, being honest, and acting consistently with intended goals and constraints.
  • Study how pre-training, mid-training, and post-training each shape downstream model behavior, and which interventions are best applied at which stage.
  • Build evaluation loops that connect model behavior back to training data and training objectives, so the team can iterate faster and with clearer signal.
  • Design reusable data generation and filtering pipelines that improve the quality, diversity, and robustness of training data.
  • Create experiments that distinguish durable learned behavior from benchmark gains, distribution-specific effects, or evaluation artifacts.
  • Collaborate across pre-training, post-training, alignment, and product-facing teams to translate research insights into better model behavior.
  • Help define the research agenda for alignment training: which behaviors should remain invariant across settings, which should adapt, and how to measure whether models have learned an underlying principle rather than a surface pattern.