The Alignment Training team studies how frontier models acquire durable behavioral tendencies across the training stack. We work on identifying which behaviors can be shaped through pre-training, mid-training, and post-training; building the data, objectives, and evaluations needed to influence them; and determining whether the resulting behavior reflects a general learned tendency or a narrow artifact of the training distribution. Our work spans synthetic data, pre-training, mid-training, post-training, model behavior, and evaluation. We study how models learn to interpret intent, follow instructions, reason through tasks, express uncertainty, act honestly, and remain reliable under new conditions. The goal is to make desirable tendencies emerge early, strengthen throughout training, and appear robustly in deployed systems.

We're looking for a senior researcher with exceptional technical depth in large-scale model training, synthetic data, or evaluation who is excited to study how training choices shape aligned behavior in frontier models. You will help shape the research agenda for alignment training: defining the behaviors we want models to learn, designing data and training interventions to teach them, and building the evaluation loops needed to tell whether those behaviors are broad, robust, and durable.

The strongest candidates will be able to move from an ambiguous behavioral question to a concrete experimental program: formulate the hypothesis, design the intervention, build the pipeline, run the experiment, and decide whether the result is real. This role is especially well suited for someone who wants to work close to the core model training loop, where choices about data, objectives, and evaluation directly shape how aligned deployed systems are.
Job Type: Full-time
Career Level: Senior
Education Level: None listed