About The Position

Propio is on a mission to make communication accessible to everyone. As a leader in real-time interpretation and multilingual language services, we connect people with the information they need across language, culture, and modality. We're committed to building AI-powered tools to enhance interpreter workflows, automate multilingual insights, and scale communication quality across industries.

The Machine Learning Operations Engineer will design, build, and maintain the production infrastructure required to deploy, scale, monitor, and govern Propio's ML and agentic AI systems. This role ensures that translation, speech, interpretation, and conversational AI models run reliably, securely, and cost-effectively in real-time environments. The MLOps Engineer bridges ML engineering, DevOps, and platform engineering—owning the end-to-end operational lifecycle from training pipelines to automated deployment to observability, aligning with HIPAA, SOC 2, and HITRUST standards.

Requirements

  • 3+ years of experience in MLOps, DevOps, or ML platform engineering
  • Proficiency in Python, ML frameworks, and MLOps tools (MLflow, Kubeflow, SageMaker)
  • Strong software engineering skills (CI/CD, version control, testing, debugging production systems)
  • Experience implementing monitoring and observability for ML systems (Datadog, Prometheus/Grafana, LangSmith, MLflow)
  • Experience with cloud platforms (AWS, GCP, or Azure)

Nice To Haves

  • Experience with LLM APIs, prompt engineering, and agentic systems

Responsibilities

Model Deployment, Serving & Infrastructure

  • Build and maintain scalable model serving infrastructure for real-time inference (translation, ASR/TTS, agentic AI workflows).
  • Implement automated CI/CD pipelines for ML models and LLM agents, including versioning, rollback strategies, and multi-environment promotion (dev → staging → prod).
  • Develop GPU/compute orchestration strategies for cost-efficient workloads across AWS (SageMaker, ECS/EKS, EC2, or Databricks).

Monitoring, Observability & Reliability

  • Implement reproducible ML workflows with strong dependency management, data lineage, feature versioning, and reproducibility guarantees.
  • Integrate observability platforms (Datadog, MLflow, LangSmith) for end-to-end tracing of agentic workflows and multi-step tool execution.
  • Build alerting systems and dashboards for both business-level metrics (quality, throughput) and engineering metrics (GPU load, memory, queue depth).

Data, Governance & Compliance

  • Ensure ML systems meet HIPAA, SOC 2, and HITRUST standards, including encryption, audit logging, access controls, and secure handling of PHI.
  • Implement data validation, schema enforcement, and drift detection to guarantee data quality for both training and inference.
  • Manage model registry, feature store, and lineage tracking across all AI services.

Collaboration & Cross-Functional Work

  • Work closely with Machine Learning Engineers to productionize models and agentic systems, ensuring seamless handoff from experimentation to deployment.
  • Collaborate with Data Engineering to operationalize data pipelines feeding ML/LLM workflows.
  • Partner with DevOps, Security Engineering, and Platform Engineering to integrate ML systems into Propio's cloud stack.

Cost Efficiency & Scalability

  • Optimize model serving architectures for latency, concurrency, and cost.
  • Implement autoscaling, caching, routing, and load-balancing solutions for high-volume LLM and speech-based systems.
  • Evaluate and implement new technologies (vector databases, real-time streaming infra, model compression, quantization).