Our Inference team builds and maintains the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. Our work empowers researchers to run models for RL, synthetic data generation, evals, and more. We are joint stewards of one of the largest compute fleets in the world, responsible for optimizing compute efficiency across our heterogeneous data centers as well as enabling cutting-edge research and production deployment. We are an applied research team embedded directly in Microsoft AI's research org to work as closely as possible with researchers, and we are vertically integrated, owning everything from kernels to architecture co-design to distributed systems to profiling and testing tools.

This role could be a great match for you if you:

- Understand modern generative AI architectures and how to optimize them for inference.
- Are familiar with the internals of open-source inference frameworks like vLLM and SGLang.
- Value clear communication, improving team processes, and being a supportive team player.
- Are results-oriented, have a bias toward action, and enjoy owning problems end-to-end.
- Have, or can quickly gain, familiarity with modern Python and its tooling, PyTorch, NVIDIA GPU kernel programming and optimization, InfiniBand, and NVLink.

Our newly formed parent organization, Microsoft AI (MAI), is dedicated to advancing Copilot and other consumer AI products and research; the organization is responsible for Copilot, Bing, Edge, and AI research. Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.
Job Type: Full-time
Career Level: Mid Level
Industry: Publishing Industries
Number of Employees: 5,001-10,000