Applied AI Researcher, Post-Training

Distyl AI
San Francisco, CA

About The Position

Distyl AI develops AI-native technologies that enable humans and AI to collaborate in powering the operations of the Global Fortune 1000. In just 24 months, we’ve rapidly grown to partner with some of the world’s largest enterprises (including F100 telecom, healthcare, manufacturing, insurance, and retail companies), delivering multiple AI deployments with $100M+ impact. Our platform, Distillery, along with our team of AI Engineers, Researchers, and Strategists, is pioneering AI-native systems of work and solving the most complex, high-stakes challenges at scale. Distyl is founded and led by proven leaders from companies like Palantir, Apple, and top national laboratories. We work in deep partnership with OpenAI, jointly going to market at the largest enterprises and collaborating on evaluating and testing the latest models. Backed by Lightspeed, Khosla, Coatue, industry leaders like Nat Friedman (former GitHub CEO), and board members of more than 20 F500 companies, Distyl is building the future of AI-powered enterprise operations.

Requirements

  • Deep understanding of post-training techniques including supervised fine-tuning, preference optimization (RLHF/DPO), LoRA/PEFT, and instruction-tuning pipelines.
  • Experience adapting frontier models and tuning LLMs/SLMs for specialized domains or behaviors.
  • Experience building intelligent systems using models rather than just training or fine-tuning them.
  • Proven track record of research results, including publications in top journals or notable work shared publicly.
  • Strong programming and data analysis skills to build prototypes and perform experiments.
  • Bias toward showing results rather than discussing theoretical ideas.

Responsibilities

  • Focus on adapting foundation models to real-world performance and alignment requirements.
  • Develop and evaluate techniques such as supervised fine-tuning, preference optimization (DPO, RLHF, RLAIF), and continual adaptation.
  • Investigate new methods for aligning large models with human and system-level objectives.
  • Explore trade-offs between generalization and specialization, data efficiency and robustness, capability and controllability.
  • Inform how Distyl leverages foundation models safely, effectively, and at scale across industries.

Benefits

  • Competitive salary and benefits package, including equity options.
  • Medical/dental/vision covered at 100% for you and your dependents.
  • 401(k) plan.
  • Commuter benefits and lunch provided in office.
  • Collaborative and intellectually stimulating environment.