About The Position

At Databricks, we are obsessed with enabling data teams to solve the world’s toughest problems, from security threat detection to cancer drug development. We do this by building and running the world’s best data and AI platform so our customers can focus on the high-value challenges that are central to their own missions.

The Databricks AI Research organization enables companies to develop AI models and agents using their own data, with technologies ranging from post-training open source LLMs to developing advanced multi-agent architectures. Databricks AI does so by producing novel science and putting it into production. Databricks AI is committed to the belief that a company’s AI models and agents are just as valuable as any other core IP, and that high-quality AI should be available to all.

As a Staff Software Engineer, AI Research Infrastructure, you will develop and run the research stack that powers Databricks AI Research. You will design and build services that schedule, orchestrate, and observe large‑scale training and inference experiment workloads across thousands of GPUs, improve our dev tooling, and ensure that researchers can iterate quickly without sacrificing reliability, efficiency, or security. You’ll partner closely with research scientists, ML engineers, and platform teams to turn experimental workloads into robust, repeatable pipelines, and to push the limits of what our infrastructure can support.

Requirements

  • BS/MS or PhD in Computer Science or related field
  • 5+ years of software engineering experience, including substantial time working on large‑scale distributed systems or infrastructure.
  • Deep experience building and operating distributed systems, data pipelines, or large‑scale backend services, ideally involving GPUs, clusters, or major cloud providers.
  • Proficiency in one or more systems programming languages (e.g., C++, Rust, Go, Java, Scala), with the ability to design, implement, and debug complex services.
  • Experience building, or significantly contributing to, cluster schedulers, resource managers, or large‑scale job orchestration systems (e.g., Kubernetes, Slurm, Ray, custom internal systems).
  • Understanding of modern ML training and inference workflows (e.g., distributed training, model parallelism, fine‑tuning, evaluation), even if you’re not primarily a research scientist.
  • Ability to move fast and be pragmatic in getting things done while caring about operational excellence; a track record of driving complex systems from prototype to stable, well‑owned services.
  • Clear communication with both researchers and engineers, and an interest in translating between research needs and infra realities.

Responsibilities

  • Design and implement infrastructure that supports large‑scale experiments, data processing, and model training (e.g., HPC clusters, GPU fleets, or cloud‑based systems).
  • Enable researchers to go from idea to large‑scale experiment in minutes, not days, by building powerful abstractions for job submission, scheduling, and monitoring.
  • Create tooling that improves research developer productivity, such as experiment management systems, CI/testing infrastructure for research code, and workflows that reduce iteration time.
  • Influence the long‑term roadmap for research computation, shaping how Databricks AI Research trains, evaluates, and ships models to customers.
  • Serve as a technical mentor and force multiplier for other engineers working on compute, infra, and AI systems.

Benefits

  • Eligibility for annual performance bonus
  • Equity