Research Scientist, RL Training

Snorkel AI, Redwood City, CA

About The Position

We're looking for a Research Scientist to work on reinforcement learning for training and aligning large language models. This is a foundational research role focused on one of the most consequential open data problems in AI: how to generate the data, reward signals, and training procedures that steer LLM behavior in reliable and generalizable directions. This capability directly differentiates Snorkel's data-as-a-service offering. You'll work closely with Snorkel's research, engineering, and delivery teams to advance our RL data capabilities: translating research ideas into the preference datasets, reward models, and RL-ready corpora we produce for frontier AI labs, and contributing to a research agenda that is central to Snorkel's long-term differentiation as a provider of bespoke training data.

Nice To Haves

  • Deep expertise in reinforcement learning from human or AI feedback, reward modeling, and credit attribution, ideally with a clear perspective on what data makes these techniques work.
  • Experience training or fine-tuning 30B+ large language models at scale, including familiarity with distributed training infrastructure.
  • Strong proficiency in Python and ML frameworks, especially PyTorch and HuggingFace, and hands-on experience with RL frameworks such as Verl and SkyRL.
  • Solid software engineering fundamentals — you can build research prototypes that others can run, extend, and integrate into data production workflows.
  • Familiarity with ML infrastructure and cloud platforms and tools (AWS, GCP, Kubernetes, Slurm, etc.); experience with large-scale RL training pipelines is a strong plus.
  • Comfort operating in a high-iteration environment with open-ended research questions and shifting, customer-driven technical constraints.
  • Ph.D. in machine learning, reinforcement learning, or a related field strongly preferred; candidates with exceptional industry experience will also be considered.

Responsibilities

  • Research and implement reinforcement learning techniques — including GRPO, RLHF, RLAIF, DPO, and reward modeling — and translate them into data products (preference datasets, reward signals, verifiable rewards) that customers can use to train and fine-tune large language models.
  • Design and build data pipelines that generate high-quality training signal for RL workflows, including AI-assisted annotation and curation pipelines that improve model generalization to unseen benchmarks.
  • Prototype and iterate on end-to-end RL training recipes that inform what data Snorkel ships as part of its data-as-a-service deliveries.
  • Work closely with research scientists, ML engineers, and delivery teams to translate RL research into customer-ready data products.
  • Stay current with the latest developments in large-scale multi-node LLM training, alignment research, and scalable RL methods (on complex environments such as Terminal-Bench), bringing relevant advances into Snorkel's data-as-a-service approach.
  • Contribute to Snorkel's research publications and internal knowledge base in RL and model training.
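To make one of the techniques above concrete: a minimal sketch of the DPO (Direct Preference Optimization) loss, which turns a preference pair into a training signal without an explicit reward model. This is an illustrative toy in plain Python on per-sequence scalar log-probabilities; the function name and inputs are assumptions for exposition, not Snorkel's implementation.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Toy DPO loss on summed log-probs of a (chosen, rejected) pair."""
    # Implicit rewards are beta-scaled log-prob ratios vs. a frozen reference.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # Bradley-Terry preference loss: -log sigmoid(reward margin).
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference, the margin is zero and the loss is log 2; the loss shrinks as the policy assigns relatively more probability to the chosen completion.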


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

Ph.D. or professional degree

Number of Employees

251-500 employees
