About The Position

The AI Frameworks team at Microsoft accelerates and optimizes large language model deployment on Microsoft's MAIA AI accelerators and GPUs. We build software across the stack, from PyTorch and inference systems such as vLLM and SGLang to performance-critical runtime and kernel components. Our team operates at the intersection of AI algorithmic innovation, purpose-built AI hardware, systems, and software, with a highly collaborative and inclusive culture.

We are seeking a self-motivated Senior Software Engineer - AI Frameworks who thrives on technical innovation, enjoys diving deep into technical details, and adapts quickly in a fast-moving environment. This is a unique opportunity to directly shape the software that powers Microsoft's most advanced AI infrastructure, from custom silicon to the models running on it.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Requirements

  • Bachelor's Degree in Computer Science or related technical field AND 4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, or Python, OR equivalent experience.

Nice To Haves

  • Master's Degree in Computer Science or related technical field AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, or Python; OR Bachelor's Degree in Computer Science or related technical field AND 8+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, or Python; OR equivalent experience.
  • Experience with PyTorch internals, custom operators, hardware backends, or torch.compile/Dynamo-based optimization flows.
  • Experience with AI inference stacks such as vLLM, SGLang, or similar large-scale model serving systems.
  • Experience with NPU or GPU kernel development and optimization (e.g., CUDA, Triton, or accelerator-specific toolchains).
  • Familiarity with common LLM concepts such as attention mechanisms, KV caching, quantization (PTQ/QAT), and distributed parallelism strategies (TP, PP, DP).

Responsibilities

  • Architect and implement efficient tensor computation primitives and software abstractions for custom AI accelerators.
  • Develop and extend PyTorch features for model onboarding, optimization, and execution on custom AI accelerators.
  • Contribute to and improve AI inference stacks such as vLLM and SGLang, including scheduling, KV cache management, and serving pipelines.
  • Design, develop, profile, and optimize high-performance kernels for NPUs (MAIA) and GPUs to accelerate LLM inference and training workloads.
  • Collaborate across disciplines to define requirements and deliver practical solutions to new technical challenges.