Research Intern, RL & Post-Training Systems, Turbo (Summer 2026)

Together AI, San Francisco, CA

About The Position

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Mamba, FlexGen, SWARM Parallelism, Mixture of Agents, and RedPajama.

The Turbo Research team investigates how to make post-training and reinforcement learning for large language models efficient, scalable, and reliable. Our work sits at the intersection of RL algorithms, inference systems, and large-scale experimentation, where the cost and structure of inference dominate overall training efficiency and shape which learning algorithms are practical. As a research intern, you will study RL and post-training methods whose performance and scalability are tightly coupled to inference behavior, co-designing algorithms and systems rather than treating them independently. Projects aim to unlock new regimes of experimentation, such as larger models, longer rollouts, and more complex evaluations, by rethinking how inference, scheduling, and training interact.

Requirements

  • Are pursuing a PhD or MS in Computer Science, EE, or a related field (exceptional undergraduates considered).
  • Have research experience in one or more of:
      ◦ RL or post-training for large models (e.g., RLHF, RLAIF, GRPO, preference optimization)
      ◦ ML systems (inference engines, runtimes, distributed systems)
      ◦ Large-scale empirical ML research or evaluation
  • Are comfortable with empirical research:
      ◦ Designing controlled experiments and ablations
      ◦ Interpreting noisy results and drawing principled conclusions
  • Can work across abstraction layers:
      ◦ Strong Python skills for experimentation
      ◦ Willingness to modify inference or training systems (experience with C++, CUDA, or similar is a plus)
  • Care about research insight, not just benchmarks:
      ◦ You ask why methods work or fail under real system constraints
      ◦ You think about how infrastructure assumptions shape algorithmic outcomes

Nice To Haves

  • Prior research experience with foundation models or efficient machine learning
  • Publications at leading ML and NLP conferences (such as NeurIPS, ICML, ICLR, ACL, or EMNLP)
  • Understanding of model optimization techniques and hardware acceleration approaches
  • Contributions to open-source machine learning projects

Benefits

  • We offer competitive compensation, housing stipends, and other benefits.
  • The estimated US hourly rate for this role is $58 - $63/hr.
  • Our hourly rates are determined by location, level, and role.
  • Individual compensation will be determined by experience, skills, and job-related knowledge.