About The Position

At Databricks, we are obsessed with enabling data teams to solve the world’s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world’s best data and AI platform so our customers can focus on the high-value challenges that are central to their own missions.

The Databricks AI Research organization enables companies to develop AI models and agents using their own data, with technologies ranging from post-training open-source LLMs to developing advanced multi-agent architectures. Databricks AI does so by producing novel science and putting it into production. Databricks AI is committed to the belief that a company’s AI models and agents are just as valuable as any other core IP, and that high-quality AI should be available to all.

As a Sr. Research Scientist on the Scaling team, you will be responsible for keeping up with the latest developments in deep learning and advancing the scientific frontier by creating new techniques that go beyond the state of the art. You will work on a collaborative team of researchers and engineers with diverse backgrounds and technical training. And most importantly, you will love our customers: our goal is to make our customers successful in applying state-of-the-art LLMs and AI systems, and we encode our scientific expertise into our products to make that possible.

Requirements

  • MS/PhD in Computer Science or related field with strong foundations in machine learning and systems
  • Proven ability to write high-quality, efficient code in Python and PyTorch for research implementation and experimentation

Nice To Haves

  • First-author publications at top ML/systems conferences (ICLR, ICML, NeurIPS, MLSys) focused on optimization or efficiency

Responsibilities

  • Define and lead independent research agendas on foundation model efficiency in model training and reinforcement learning, conducting experiments to empirically validate hypotheses and benchmark against state-of-the-art approaches
  • Drive algorithmic innovations for large-scale neural network training or inference (e.g., novel optimizers, low-precision techniques, model adaptation methods)
  • Optimize ML systems for distributed training, memory efficiency, and compute efficiency through hands-on implementation