Senior ML Ops Engineer

Axiado Corporation, San Jose, CA

About The Position

We are looking for a Senior MLOps Engineer to own and build the end-to-end machine learning lifecycle, with a special focus on secure, reliable deployment to edge devices. You are a systems thinker and a hands-on engineer. You will be responsible for everything from the initial data pipeline to the final on-device model verification. You will design our data-labeling feedback loops, build the CI/CD pipelines that convert and deploy models, and implement the monitoring systems that tell us how those models are actually performing in the wild, in terms of both speed and quality. This role is a unique blend of data engineering, DevOps, ML security, and performance optimization. You will be the engineer who ensures our models are not only fast but also trusted, secure, and continuously improving.

Requirements

  • 5+ years of experience in MLOps, DevOps, or Software Engineering with a focus on ML systems.
  • Proven experience building and managing the full MLOps lifecycle, from data ingestion to production monitoring.
  • Strong programming skills in Python and deep experience with ML frameworks (e.g., PyTorch, TensorFlow).
  • Demonstrable experience with model conversion and optimization for edge devices (e.g., using ONNX, TFLite, TensorRT, or Apache TVM).
  • Strong understanding of data engineering principles and experience with data-labeling strategies (HITL/Active Learning).
  • Excellent understanding of CI/CD principles and tools (e.g., Git, Docker, GitLab CI).

Nice To Haves

  • Hands-on experience with Kubernetes (K8s) for MLOps orchestration (e.g., Kubeflow, Argo Workflows).
  • Familiarity with GPU scheduling and virtualization platforms such as Run:AI.
  • Proficiency in managing MLOps infrastructure on at least one major cloud platform (AWS, GCP, Azure).
  • Experience with embedded systems security, cryptographic signing, or hardware security modules (HSMs).
  • Experience in C++ for deploying high-performance inference code.

Responsibilities

  • Data & Labeling Lifecycle Management:
      • Architect and implement scalable data processing pipelines for ingesting, validating, and versioning massive datasets (e.g., using DVC, Pachyderm, or custom S3/Artifactory solutions).
      • Design and build the infrastructure for our Human-in-the-Loop (HITL) and AI-in-the-Loop (Active Learning) data-labeling systems, including the feedback loops that identify high-value data for re-labeling.
      • Conduct deep data analysis to identify data drift, dataset bias, and feature drift, ensuring the statistical integrity of our training and validation sets.
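The data-drift analysis mentioned above is often implemented with a simple distributional statistic. Below is a minimal, dependency-free sketch of the Population Stability Index (PSI) over equal-width bins; the common rule of thumb that values above roughly 0.2 indicate significant drift is an industry convention, not something the posting specifies:

```python
import math


def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a baseline sample and a live sample.

    Values above ~0.2 are commonly treated as significant drift.
    `eps` keeps empty bins from producing log(0).
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against constant data

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        return [c / len(sample) + eps for c in counts]

    e = bin_fractions(expected)
    a = bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice the baseline fractions would be computed once from the training set and shipped alongside the model, so the on-device agent only bins live inputs.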
  • On-Device Model Monitoring:
      • Design and deploy lightweight, on-device telemetry agents to monitor inference quality and concept drift, not just operational metrics.
      • Implement statistical monitoring on model outputs (e.g., confidence distributions, output ranges) and create automated alerting systems to flag model degradation.
      • Build the backend dashboards (e.g., Grafana, custom dashboards) to aggregate and visualize on-device performance and quality metrics from a fleet of edge devices.
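The confidence-distribution monitoring described above can be sketched as a rolling-window check against an offline baseline. The class name, window size, and tolerance below are illustrative assumptions, not part of the posting:

```python
from collections import deque


class ConfidenceMonitor:
    """Flags degradation when the rolling mean of top-1 confidence
    falls more than `tolerance` below a baseline measured offline."""

    def __init__(self, baseline_mean, window=100, tolerance=0.1):
        self.baseline = baseline_mean
        self.window = deque(maxlen=window)  # keeps only the latest samples
        self.tolerance = tolerance

    def observe(self, confidence):
        self.window.append(confidence)

    def degraded(self):
        if len(self.window) < self.window.maxlen:
            return False  # not enough samples to judge yet
        mean = sum(self.window) / len(self.window)
        return mean < self.baseline - self.tolerance
```

An on-device agent would feed `observe()` on every inference and emit an alert (or a telemetry flag for the backend dashboards) whenever `degraded()` flips to true.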
  • Model Conversion & Deployment (CI/CD for ML):
      • Build and maintain a robust CI/CD pipeline (e.g., GitLab CI, Jenkins, GitHub Actions) that automates model training, conversion, quantization (PTQ/QAT), and packaging.
      • Manage the model conversion process, translating models from PyTorch/TensorFlow into optimized formats (e.g., ONNX, TFLite) for our AI inference engine.
      • Orchestrate model deployment to edge devices, managing model versioning and enabling reliable Over-the-Air (OTA) updates.
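The packaging and OTA steps above imply some manifest an updater can check before swapping a model in. A minimal sketch, assuming a JSON-style manifest with invented field names (`sha256`, `size_bytes`); this is illustrative, not a description of any standard OTA format:

```python
import hashlib


def build_release_manifest(model_bytes, version, target_format="onnx"):
    """Metadata an OTA updater can use to verify a model blob
    before installation. Field names are illustrative."""
    return {
        "version": version,
        "format": target_format,
        "sha256": hashlib.sha256(model_bytes).hexdigest(),
        "size_bytes": len(model_bytes),
    }


def verify_download(model_bytes, manifest):
    """Reject truncated or corrupted downloads before installing."""
    return (
        len(model_bytes) == manifest["size_bytes"]
        and hashlib.sha256(model_bytes).hexdigest() == manifest["sha256"]
    )
```

A checksum only catches accidental corruption; tamper resistance requires the cryptographic signing covered under the security responsibilities.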
  • On-Device Model Security & Verification:
      • Implement a robust model verification framework using cryptographic signatures to verify model authenticity and integrity (i.e., that the model running on-device is the one we deployed).
      • Design and apply security protocols (e.g., secure boot, model encryption) to prevent model injection attacks and unauthorized model tampering on the device.
      • Collaborate with firmware and hardware security teams to ensure our MLOps pipeline adheres to a hardware root of trust.
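The signature-based verification above might look like the following sketch. A production pipeline would use asymmetric signatures (e.g., Ed25519) with the private key held in an HSM, as the posting's nice-to-haves suggest; the symmetric HMAC here is only a dependency-free stand-in:

```python
import hashlib
import hmac


def sign_model(model_bytes, key):
    """MAC over the model blob. Stand-in for an asymmetric signature
    whose private key would live in an HSM, never on the device."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()


def verify_model(model_bytes, key, tag):
    """Constant-time comparison avoids timing side channels."""
    expected = sign_model(model_bytes, key)
    return hmac.compare_digest(expected, tag)
```

On-device, the verifier and its key material would themselves be anchored in the hardware root of trust (secure boot), so an attacker cannot simply replace the check along with the model.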
  • Performance Optimization:
      • Analyze and optimize ML model performance for our specific AI inference engine.
      • Apply graph-level optimizations (e.g., operator fusion, pruning) and op-level optimizations (e.g., rewriting custom ops, leveraging hardware-specific data types) to maximize throughput and minimize latency.
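Operator fusion, named above, can be illustrated with a toy graph pass that collapses back-to-back constant multiplies into one op, the same idea behind folding a batch-norm into the preceding convolution. Real compilers (e.g., TVM, TensorRT) do this on typed compute graphs, so everything below is a deliberate simplification:

```python
def fuse_scales(graph):
    """Toy graph-level pass: merge adjacent ("mul", c) ops.

    `graph` is a list of (op_name, arg) tuples, e.g.
    [("mul", 2.0), ("mul", 3.0), ("relu", None)]. Fusing removes one
    kernel launch and one intermediate tensor per merged pair.
    """
    fused = []
    for op, arg in graph:
        if op == "mul" and fused and fused[-1][0] == "mul":
            _, prev_arg = fused.pop()
            fused.append(("mul", prev_arg * arg))  # c1 * c2 in one op
        else:
            fused.append((op, arg))
    return fused
```

Note that fusion must respect op semantics: the pass cannot merge multiplies across the non-linear `relu`, which is why the check only looks at the immediately preceding op.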


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Publishing Industries
  • Education Level: No Education Listed
  • Number of Employees: 101-250 employees
