Staff Software Engineer - AI Research Infrastructure

Databricks · San Francisco, CA
$199,000 - $270,000

About The Position

As a Staff Software Engineer, AI Research Infrastructure, you will develop and run the research stack that powers Databricks AI Research. You will design and build services that schedule, orchestrate, and observe large‑scale training and inference experiment workloads across thousands of GPUs, and you will improve our developer tooling so that researchers can iterate quickly without sacrificing reliability, efficiency, or security. You will partner closely with research scientists, ML engineers, and platform teams to turn experimental workloads into robust, repeatable pipelines and to push the limits of what our infrastructure can support.

Requirements

  • BS, MS, or PhD in Computer Science or a related field
  • 5+ years of software engineering experience, including substantial time working on large‑scale distributed systems or infrastructure
  • Deep experience building and operating distributed systems, data pipelines, or large‑scale backend services, ideally involving GPUs, clusters, or major cloud providers
  • Proficiency in one or more systems programming languages (e.g., C++, Rust, Go, Java, Scala), with the ability to design, implement, and debug complex services
  • Experience building, or significantly contributing to, cluster schedulers, resource managers, or large‑scale job orchestration systems (e.g., Kubernetes, Slurm, Ray, custom internal systems)
  • Understanding of modern ML training and inference workflows (e.g., distributed training, model parallelism, fine‑tuning, evaluation), even if you're not primarily a research scientist
  • Ability to move fast and stay pragmatic while caring about operational excellence; a track record of driving complex systems from prototype to stable, well‑owned services
  • Clear communication with both researchers and engineers, and enjoyment of translating between research needs and infrastructure realities

Responsibilities

  • Design and implement infrastructure that supports large‑scale experiments, data processing, and model training (e.g., HPC clusters, GPU fleets, or cloud‑based systems)
  • Enable researchers to go from idea to large‑scale experiment in minutes, not days, by building powerful abstractions for job submission, scheduling, and monitoring
  • Create tooling that improves research developer productivity, such as experiment management systems, CI/testing infrastructure for research code, and workflows that reduce iteration time
  • Influence the long‑term roadmap for research computation, shaping how Databricks AI Research trains, evaluates, and ships models to customers
  • Serve as a technical mentor and force multiplier for other engineers working on compute, infrastructure, and AI systems

Benefits

  • Eligibility for annual performance bonus
  • Equity
  • Comprehensive benefits and perks