About The Position

About Salesforce

Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn’t a buzzword — it’s a way of life. The world of work as we know it is changing, and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You’re in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

About Slack AI

Slack AI's mission is to transform how people work by making Slack an AI-powered operating system. We're tackling significant challenges like unlocking collective knowledge and reducing noise, all while building a seamless, consumer-grade AI experience within users' existing workflows. Join us in shaping the future of work through AI.

About the Team

The AI and ML Infrastructure team is part of Slack’s Core Infrastructure organization and is responsible for the foundational systems that enable machine learning and AI across the company. The team designs, builds, and operates reliable, scalable, high-performance platforms that allow product and ML teams to develop, deploy, and operate AI-driven capabilities with confidence. The team owns the shared infrastructure, services, and tooling that support the full ML lifecycle, including model training, deployment, inference, and monitoring. As Slack AI continues to grow, the team is evolving from traditional ML deployments toward large-scale, highly distributed systems. This work involves deep architectural decisions around scalable model deployment strategies, real-time feature serving at very high throughput, GPU-accelerated inference at message scale, and responsible training of models on sensitive data with strong privacy and safety requirements.
Core Focus Areas

ML Infrastructure - The ML Infrastructure focus area is responsible for the low-level systems that power training and inference at scale. This includes architecting and maintaining distributed systems for model training, serving, and deployment using Kubernetes-based platforms, GPU infrastructure, and open-source ML stacks such as KubeRay and vLLM. The team delivers platform capabilities that improve the speed, reliability, and quality of ML development, including training pipelines, feature generation systems, and compute orchestration.

AI Platform - The AI Platform focus area builds the tooling and platform layers that enable AI development across Slack. This includes creating developer-facing tools, SDKs, and workflows that allow product teams to integrate AI into Slack features efficiently and safely. The platform supports LLM efficiency and model transition initiatives through integrations with managed services across multiple cloud providers, acting as the connective layer between core infrastructure and product engineering teams.

About the Role

We are looking for a Senior or Staff Software Engineer to join the ML Infrastructure focus area and help architect and operate the core systems that power AI at Slack. In this role, you will own foundational infrastructure for large-scale model training and inference and evolve it into a reliable, secure, self-service platform used across the company. You will work at the intersection of distributed systems, GPU infrastructure, and modern ML stacks, solving complex scalability and reliability challenges. This role blends deep systems engineering with a strong understanding of the ML lifecycle and plays a critical part in shaping the long-term technical foundations of Slack’s AI capabilities.

Requirements

  • Significant professional experience in software engineering with a strong focus on infrastructure, backend systems, platform engineering, or MLOps
  • Deep experience building and operating distributed systems, including expert-level knowledge of Kubernetes and container-based platforms
  • Hands-on experience with modern ML infrastructure and serving stacks such as Ray or KubeRay, vLLM, or similar training and inference orchestration frameworks
  • Experience working with GPU infrastructure, including performance optimization and operational management at scale
  • Strong experience with data infrastructure and orchestration technologies such as Airflow, Spark, or similar systems
  • Experience building and operating cloud native systems on public cloud platforms such as AWS, GCP, or Azure, including infrastructure as code
  • A demonstrated ability to drive technical direction for complex systems and to balance short-term delivery with long-term architectural goals
  • Excellent written communication and the ability to thrive in an asynchronous, globally distributed infrastructure team
  • A related technical degree is required

Responsibilities

  • Design, build, and operate systems to train, serve, and deploy machine learning models at scale, with a focus on reliability, performance, and operational simplicity
  • Evolve GPU-backed inference infrastructure to support high-throughput, latency-sensitive workloads, including large-scale model serving
  • Architect and optimize distributed training and data processing systems using platforms such as Ray, Airflow, Spark, or similar technologies
  • Build and maintain Kubernetes-based platforms and orchestration layers using tools such as KubeRay, vLLM, and internally developed services
  • Architect solutions that bridge legacy systems with modern technologies while maintaining the stability of the existing monolithic application
  • Develop robust monitoring, observability, and alerting for production ML workloads to ensure operational excellence
  • Partner closely with AI Platform, ML modeling, security, and product engineering teams to design infrastructure that supports evolving AI use cases
  • Provide technical leadership through design reviews, mentorship, and by setting engineering standards and long-term architectural direction for ML infrastructure
  • Author technical design and architecture documentation, and contribute thought leadership through engineering blog posts

Benefits

  • time off programs
  • medical
  • dental
  • vision
  • mental health support
  • paid parental leave
  • life and disability insurance
  • 401(k)
  • an employee stock purchasing program