About The Position

Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Slack AI's mission is to transform how people work by making Slack an AI-powered operating system, tackling significant challenges like unlocking collective knowledge and reducing noise, while building a seamless, consumer-grade AI experience within users' existing workflows.

The AI and ML Infrastructure team, part of Slack's Core Infrastructure organization, is responsible for the foundational systems that enable machine learning and AI across the company. The team designs, builds, and operates reliable, scalable, and high-performance platforms that allow product and ML teams to develop, deploy, and operate AI-driven capabilities with confidence. It owns the shared infrastructure, services, and tooling that support the full ML lifecycle, including model training, deployment, inference, and monitoring.

As Slack AI grows, the team is evolving from traditional ML deployments toward large-scale, highly distributed systems. This involves deep architectural decisions around scalable model deployment strategies, real-time feature serving at very high throughput, GPU-accelerated inference at message scale, and responsible training of models on sensitive data with strong privacy and safety requirements.

The ML Infrastructure focus area specifically handles low-level systems for training and inference at scale, including architecting and maintaining distributed systems for model training, serving, and deployment using Kubernetes-based platforms, GPU infrastructure, and open-source ML stacks such as KubeRay and vLLM. This team delivers platform capabilities that improve the speed, reliability, and quality of ML development, including training pipelines, feature generation systems, and compute orchestration.
The AI Platform focus area builds tooling and platform layers for AI development across Slack, creating developer-facing tools, SDKs, and workflows for efficient and safe AI integration into Slack features, and supporting LLM efficiency and model transition initiatives through integrations with managed services across multiple cloud providers.

We are looking for Software Engineers to join the ML Infrastructure focus area to help architect and operate the core systems that power AI at Slack. In this role, you will own foundational infrastructure for large-scale model training and inference, evolving it into a reliable, secure, and self-service platform used across the company. You will work at the intersection of distributed systems, GPU infrastructure, and modern ML stacks, solving complex scalability and reliability challenges. The role blends deep systems engineering with a strong understanding of the ML lifecycle, and plays a critical part in shaping the long-term technical foundations of Slack's AI capabilities.

Requirements

  • Significant professional experience in software engineering with a strong focus on infrastructure, backend systems, platform engineering, or MLOps
  • Deep experience building and operating distributed systems, including expert-level knowledge of Kubernetes and container-based platforms
  • Hands-on experience with modern ML infrastructure and serving stacks such as Ray or KubeRay, vLLM, or similar training and inference orchestration frameworks
  • Experience working with GPU infrastructure, including performance optimization and operational management at scale
  • Strong experience with data infrastructure and orchestration technologies such as Airflow, Spark, or similar systems
  • Experience building and operating cloud native systems on public cloud platforms such as AWS, GCP, or Azure, including infrastructure as code
  • A demonstrated ability to drive technical direction for complex systems and balance short-term delivery with long-term architectural goals
  • Excellent written communication and the ability to thrive in an asynchronous, globally distributed infrastructure team
  • A related technical degree is required

Responsibilities

  • Design, build, and operate systems to train, serve, and deploy machine learning models at scale, with a focus on reliability, performance, and operational simplicity
  • Evolve GPU-backed inference infrastructure to support high-throughput, latency-sensitive workloads, including large-scale model serving
  • Architect and optimize distributed training and data processing systems using platforms such as Ray, Airflow, Spark, or similar technologies
  • Build and maintain Kubernetes-based platforms and orchestration layers using tools such as KubeRay, vLLM, and internally developed services
  • Architect solutions that bridge legacy systems with modern technologies while maintaining the stability of the monolithic application
  • Develop robust monitoring, observability, and alerting for production ML workloads to ensure operational excellence
  • Partner closely with AI Platform, ML modeling, security, and product engineering teams to design infrastructure that supports evolving AI use cases
  • Provide technical leadership through design reviews, mentorship, and by setting engineering standards and long-term architectural direction for ML infrastructure
  • Author technical design and architecture documentation, and contribute thought leadership through engineering blog posts

Benefits

  • time off programs
  • medical
  • dental
  • vision
  • mental health support
  • paid parental leave
  • life and disability insurance
  • 401(k)
  • employee stock purchasing program
  • company bonus
  • equity