At Databricks, we are passionate about enabling data teams to solve the world's toughest problems - from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business.

Foundation Model Serving is the API product for hosting and serving frontier AI model inference, covering open source models like Llama, Qwen, and GPT OSS as well as proprietary models like Claude and OpenAI GPT. For this role, no prior ML or AI experience is necessary. We're looking for engineers who have owned high-scale, operationally sensitive systems - customer-facing APIs, edge gateways, ML inference, or similar services - and who are interested in going deep on building LLM APIs and runtimes at scale.

As a Staff Engineer, you'll play a critical role in shaping both the product experience and the core infrastructure. You will design and build systems that enable high-throughput, low-latency inference on GPU workloads with frontier models, influence architectural direction, and collaborate closely across platform, product, infrastructure, and research teams to deliver a world-class foundation model API product.
Job Type: Full-time
Career Level: Mid Level
Industry: Professional, Scientific, and Technical Services
Education Level: No Education Listed
Number of Employees: 5,001-10,000