Senior Software Engineer - Infrastructure

Baseten · San Francisco, CA

About The Position

Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.

As a Senior Infrastructure Software Engineer at Baseten, you'll architect and lead development of our ML inference platform, which powers production AI applications. You'll make key technical decisions for the infrastructure that enables developers to deploy, scale, and monitor ML models with high performance and reliability. As part of our Infrastructure team, you'll work on projects such as:

  • Multi-cloud capacity management
  • Inference on B200 GPUs
  • Multi-node inference
  • Fractional H100 GPUs for efficient model serving

Requirements

  • Bachelor's degree or higher in Computer Science or related field
  • 5+ years experience building production infrastructure systems
  • Expert-level proficiency in Go, with Python experience a plus
  • Deep expertise with Kubernetes in production environments
  • Extensive experience with major cloud providers (AWS, GCP); experience with neo-cloud providers (Crusoe, DigitalOcean, Nebius) a plus
  • Advanced understanding of distributed systems concepts and performance tuning
  • Proven experience designing observability systems
  • Track record of leading technical initiatives and mentoring engineers
  • Experience with ML/AI workloads and MLOps platforms highly valued

Responsibilities

  • Design and architect scalable infrastructure systems for our ML inference platform
  • Lead optimization of Kubernetes deployments for efficient, cost-effective model serving
  • Drive enhancements to our inference orchestration layer for complex model deployments
  • Define monitoring strategies for model performance, latency, and resource utilization
  • Develop advanced solutions for GPU capacity management and throughput optimization
  • Establish infrastructure automation standards to streamline ML deployment workflows
  • Partner with other engineers to translate complex inference requirements into technical solutions
  • Make critical architectural decisions balancing performance with system reliability
  • Lead technical discussions and mentor junior engineers on infrastructure best practices
  • Contribute to long-term technical strategy and infrastructure roadmap

Benefits

  • Competitive compensation, including meaningful equity
  • 100% coverage of medical, dental, and vision insurance for employee and dependents
  • Generous PTO policy, including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!)
  • Paid parental leave
  • Company-facilitated 401(k)
  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities