Member of Technical Staff, Post-Training

Inception, San Francisco, CA

About The Position

We seek experienced scientists and engineers with deep expertise in post-training large language models through reinforcement learning. You will design and implement RL training pipelines for our diffusion LLMs, develop reward modeling strategies, and build the algorithms that align model behavior with human intent at scale.

Requirements

  • BS/MS/PhD in Computer Science or a related field (or equivalent experience).
  • At least 2 years of experience working on ML projects in PyTorch (or equivalent), preferably in a research lab or engineering role.
  • Strong familiarity with transformers and core LLM concepts (autoregressive pretraining, instruction tuning, in-context learning, KV caching).
  • Hands-on experience with reinforcement learning from human feedback (RLHF), PPO, DPO, or related post-training methods.
  • Familiarity with training and inference in diffusion models.
  • Experience training deep learning models at scale in distributed computing environments.

Nice To Haves

  • Extensive experience training transformer-based language models from scratch.
  • Experience designing and implementing reward models or preference learning systems.
  • Knowledge of advanced training techniques (mixed precision, gradient accumulation, etc.).
  • Background in optimization theory and neural network architecture design.
  • Experience with LLM serving frameworks like vLLM, SGLang, or TensorRT.

Responsibilities

  • Design, develop, and optimize RL training pipelines (PPO, DPO, RLHF, and novel approaches) for diffusion-based LLMs.
  • Build and iterate on reward models, reward shaping strategies, and evaluation of reward quality.
  • Implement innovative approaches for fine-tuning and scaling generative AI models.
  • Work on data preprocessing pipelines, model evaluation, and alignment to enterprise use cases.
  • Research and implement techniques for controlled text generation and constraint satisfaction.
  • Improve training stability, efficiency, and reproducibility of RL workloads.

What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
