Research, Mid-Training

Cognition, San Francisco, CA

About The Position

We are an applied AI lab building end-to-end software agents. We're the team behind Devin, the first AI software engineer, and Windsurf, an AI-native IDE. These products represent our vision for AI that doesn't just assist engineers but works alongside them as a genuine teammate. Our team is small and talent-dense: world-class competitive programmers, former founders, and researchers from the frontier of AI, including Scale AI, Palantir, Cursor, Google DeepMind, and others.

Mid-training sits at the seam between pre-training and post-training and is one of the highest-leverage points in the entire model pipeline. This is where raw base-model capability is sharpened into something that can reason deeply, generalize reliably, and serve as the foundation that post-training builds on. You will own the late-stage training decisions that determine what our models are fundamentally capable of: data mix and quality uplift, annealing schedules, context length extension, capability injection across coding, math, and reasoning, and the synthetic data strategies that make all of it scale.

This role cuts across what is classically considered both pre-training and post-training. We don't distinguish between research and engineering; we expect both.

Requirements

  • Deep familiarity with the LLM training pipeline end to end: pre-training data, optimization, architecture, and how mid-training and post-training interact
  • Hands-on experience with continual pre-training, annealing, or late-stage data mixing for large models
  • Strong intuition for data quality: what makes a dataset useful for training, how to filter and curate at scale, and how data mix choices compound across evals
  • Experience developing or evaluating synthetic data pipelines for capability improvement
  • Proficiency in Python and deep learning frameworks (PyTorch, JAX); comfortable debugging distributed training at scale
  • Strong fundamentals in optimization, statistics, and ML theory; able to distinguish real effects from noise, instability, and overfitting
  • A track record of original contributions: publications, open-source impact, or internal results that moved a capability frontier
  • Comfort operating in ambiguous, fast-moving environments where the problem definition is as important as the solution

Responsibilities

  • Data Mix and Quality Uplift: Design and iterate on high-quality data mixtures for late-stage and annealing training runs. Develop principled methods for sourcing, filtering, and weighting data to sharpen model capabilities without degrading general performance.
  • Capability Injection: Drive targeted improvements in coding, mathematics, and long-horizon reasoning through curated data strategies and training interventions. Translate research insights into measurable capability gains on our agents.
  • Synthetic Data Research: Develop and evaluate synthetic data pipelines that generate training signal at scale. Understand the limits and failure modes of synthetic approaches and build methods that hold up in production training runs.
  • Annealing and Schedule Design: Research and optimize multi-stage learning rate schedules, warmup strategies, and compute allocation across training phases. Understand how schedule choices interact with data distribution and model behavior.
  • Context Length Extension: Research and implement methods for extending effective context length without degrading short-context performance. This includes positional encoding strategies, data construction, and targeted evaluation.
  • Evaluation and Iteration: Build evals that distinguish real capability improvements from benchmark overfitting. Close the loop between training decisions and what actually matters for Devin and our other systems in deployment.
  • Scaling and Methodology: Measure how mid-training interventions scale with compute and data. Develop new approaches when existing methods hit ceilings; we expect both rigorous empiricism and original thinking.
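As a rough illustration of the schedule design work described above, here is a minimal sketch of a warmup-then-cosine learning rate schedule, one common multi-stage pattern. The function name and all hyperparameter values are illustrative assumptions, not Cognition's actual configuration:

```python
import math

def lr_at_step(step, total_steps, peak_lr=3e-4, final_lr=3e-5, warmup_steps=1000):
    """Illustrative warmup-then-cosine schedule: linear warmup from 0 to
    peak_lr over warmup_steps, then cosine decay down to final_lr over the
    remaining steps. All names and values are hypothetical examples."""
    if step < warmup_steps:
        # Linear warmup phase.
        return peak_lr * step / warmup_steps
    # Cosine decay phase: progress runs from 0.0 (end of warmup) to 1.0.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```

How the decay phase interacts with late-stage data mixing (e.g. upweighting high-quality data during the anneal) is exactly the kind of question this role owns.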


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
