Member of Technical Staff - Site Reliability Engineer

Microsoft, Redmond, WA
$117,200 - $229,200

About The Position

As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all — consumers, businesses, developers — so that everyone can realize its benefits.

We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Requirements

  • 4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.
  • Strong proficiency in Kubernetes, Docker, and container orchestration.
  • Knowledge of CI/CD pipelines for inference and ML model deployment.
  • Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code.
  • Expertise in monitoring & observability tools (Grafana, Datadog, OpenTelemetry, etc.).
  • Strong programming/scripting skills in Python, Go, or Bash.
  • Solid knowledge of distributed systems, networking, and storage.
  • Experience running large-scale GPU clusters for ML/AI workloads (preferred).

Nice To Haves

  • Familiarity with ML training/inference pipelines.
  • Experience with high-performance computing (HPC) and workload schedulers (e.g., Kubernetes operators).
  • Background in capacity planning & cost optimization for GPU-heavy environments.

Responsibilities

  • Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.
  • Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model-serving pipelines and infrastructure.
  • Analyze system performance and scalability, and optimize resource utilization (compute, GPU clusters, storage, networking).
  • Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.
  • Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.
  • Ensure data privacy, compliance, and secure operations across model training and serving environments.
  • Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.

Benefits

  • Competitive compensation, equity options, and comprehensive benefits.