About The Position

Baseten powers mission-critical inference for the world's most dynamic AI companies. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. The Voice AI team is focused on bringing state-of-the-art open source models into production for Voice AI customers across various industries. This is a high-impact, high-ownership role where the engineer will be the primary owner of Baseten Voice AI, our in-house inference stack for Voice AI models, from product roadmap through engineering implementation. The role involves partnering closely with Forward Deployed Engineers, Model Performance Engineers, and sister engineering teams to push the boundaries of Voice AI.

Requirements

  • Bachelor's degree or higher in Computer Science or related field
  • Proven track record owning production-grade real-time, large-scale systems where tail latency (p99) matters.
  • Proficiency in one or more popular programming or scripting languages; Python is a plus.
  • Good taste in product, particularly in developer-oriented tools
  • Interest in ML/AI infrastructure and willingness to learn
  • Strong collaboration and communication skills
  • Comfortable using AI coding assistants (e.g., Claude Code, Codex, Cursor) as a daily productivity multiplier — as an AI-native company, we see this as a must-have skill.

Nice To Haves

  • Experience implementing pipeline-level model runtime optimizations such as dynamic batching, async scheduling, or decode-side throughput improvements.
  • Experience building developer platforms: SDKs, CLIs, APIs, and self-serve workflows for ML or infrastructure products.
  • Experience with containerization and orchestration technologies (Docker, Kubernetes), service meshes, or distributed scheduling.
  • Familiarity with speech/audio ML models (speech-to-text/STT, text-to-speech/TTS)
  • Familiarity with model-serving runtimes (vLLM, TensorRT, ONNX).
  • Familiarity with systems-level performance profiling across host-device boundaries (e.g. PyTorch Profiler), diagnosing GPU utilization issues
  • Exposure to customer-facing engineering: pre-sales prototyping, technical discovery, or working directly with customers to ship solutions.

Responsibilities

  • Own and lead Voice AI product areas end-to-end — from architecture and system design through implementation, rollout, and long-term production operations.
  • Design, build, and operate real-time, large-scale, high-performance model serving systems for STT, TTS, and voice agent workloads with clear SLOs for mission-critical customer deployments.
  • Drive cross-team collaboration with sister engineering teams to solve full-stack technical problems, align on priorities, and coordinate end-to-end delivery across the product surface area.
  • Mentor teammates through code reviews, design docs, and technical leadership.

Benefits

  • Competitive compensation, including meaningful equity.
  • 100% coverage of medical, dental, and vision insurance for employees and dependents
  • Flexible PTO policy including a company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!)
  • Paid parental leave
  • Fertility and family-building stipend through Carrot
  • Company-facilitated 401(k)
  • Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.