Member of Technical Staff, LLM Inference

Microsoft Corporation · Redmond, WA
Hybrid

About The Position

Our Inference team is responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. Our work empowers researchers to run models for RL, synthetic data generation, evals, and more. We are joint stewards of one of the largest compute fleets in the world. The team is responsible for optimizing compute efficiency across our heterogeneous data centers as well as enabling cutting-edge research and production deployment. We are an applied research team embedded directly in Microsoft AI's research org to work as closely as possible with researchers. We are vertically integrated, owning everything from kernels to architecture co-design to distributed systems to profiling and testing tools.

This role could be a great match for you if you:

  • Understand modern generative AI architectures and how to optimize them for inference.
  • Are familiar with the internals of open-source inference frameworks like vLLM and SGLang.
  • Value clear communication, improving team processes, and being a supportive team player.
  • Are results-oriented, have a bias toward action, and enjoy owning problems end-to-end.
  • Have, or can quickly gain, familiarity with modern Python and its tooling, PyTorch, Nvidia GPU kernel programming and optimization, InfiniBand, and NVLink.

Our newly formed parent organization, Microsoft AI (MAI), is dedicated to advancing Copilot and other consumer AI products and research. The team is responsible for Copilot, Bing, Edge, and AI research. Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.

Requirements

  • Bachelor's Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.

Nice To Haves

  • Master's Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR Bachelor's Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR equivalent experience.
  • Experience with generative AI.
  • Experience with distributed computing.
  • Expertise with Python and its ecosystem (e.g., uv, pybind/nanobind, FastAPI).
  • Experience with large-scale production inference.
  • Experience with GPU kernel programming.
  • Experience benchmarking, profiling, and optimizing PyTorch generative AI models.
  • Experience with open source inference frameworks like vLLM and SGLang.
  • Working familiarity with, and the ability to discuss, the material in the JAX scaling book.

Responsibilities

  • Work alongside researchers and engineers to implement frontier AI research ideas.
  • Introduce new systems, tools, and techniques to improve model inference performance.
  • Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues.
  • Build tools and establish processes to enhance the team's collective productivity.
  • Find ways to overcome roadblocks and deliver your work to users quickly and iteratively.
  • Enjoy working in a fast-paced, design-driven product development cycle.
  • Embody our Culture and Values.

What This Job Offers

Job Type

Full-time

Career Level

Mid Level
