Machine Learning Operations Engineer

Modulate
Somerville, MA · Hybrid

About The Position

Modulate is the leader in conversational voice intelligence. We enable enterprises to deeply understand how people communicate and to take timely action based on those insights. Our products help detect harm, prevent fraud, and build safer, more trusted online and real-world voice environments. We are building a Conversation Intelligence Platform — APIs, workflows, and applications that bring voice understanding to customers at enterprise scale.

We're looking for a Machine Learning Operations Engineer to own and scale the production inference systems behind Modulate's machine learning models. This role focuses on ensuring high availability, reliability, and efficiency of deployed models across our APIs and enterprise products as customer usage and model demand grow rapidly.

Requirements

  • Experience deploying and maintaining production software systems
  • Experience building monitoring and alerting systems for production environments
  • Experience with on-call rotations and incident response
  • Strong experience with AWS, Python, and Linux
  • Exposure to PyTorch or similar ML frameworks
  • Experience working with GPU-based applications and basic GPU tooling (drivers, runtime, monitoring)
  • Strong debugging and systems thinking skills
  • Ability to operate calmly in production incident environments

Nice To Haves

  • Experience with ML model serving systems or dedicated model servers
  • Experience monitoring GPU performance for inference workloads
  • Experience optimizing machine learning model inference
  • Familiarity with audio or multimedia data (codecs, streaming, real-time systems)
  • Experience with infrastructure-as-code (e.g., Terraform, CloudFormation)

Responsibilities

  • Own the reliability, availability, and performance of ML model inference systems across APIs and enterprise products
  • Deploy, monitor, and maintain production machine learning inference systems
  • Oversee fleets of inference machines and ensure system health and performance
  • Design monitoring, alerting, and incident response systems for ML workloads
  • Participate in on-call rotations and lead incident response and debugging
  • Build systems and processes for scaling inference infrastructure under variable load and production traffic growth
  • Reduce operational burden through better tooling, automation, and processes
  • Improve the reliability and observability of production ML services
  • Collaborate on infrastructure-as-code for production deployments
  • Support and contribute to GPU-based training and inference infrastructure
  • Work closely with ML and engineering teams to ensure smooth model deployments
  • Optimize model inference performance and latency
  • Help define how Modulate runs ML systems at scale with reliability and efficiency

Benefits

  • Competitive salary + equity
  • Full health, dental, and vision coverage
  • Flexible PTO with a strong culture of actually taking it
  • Weekly team lunches with dietary accommodations
  • Hybrid work with core in-office days and flexible remote options
  • Leadership and technical learning sessions
  • Career development and continued learning support
  • Up to 8 weeks work-from-anywhere policy
  • A deeply inclusive, human-centered culture
  • HSA, FSA, 15 holidays, professional growth resources