MLOps Engineer

Knowmadics, Inc.
Wichita, KS (Remote)

About The Position

The MLOps Engineer designs, builds, and operates scalable machine learning systems that transform spatial-temporal and sensor-derived data into reliable ML workflows. This role spans the full ML lifecycle, from ingest, normalization, and feature-engineering pipelines through distributed training and evaluation to low-latency inference and operational integration. Working across data infrastructure and deployment environments, the engineer operationalizes experimental models into reproducible, observable, and scalable systems. They ensure ML pipelines, containerized workloads, and CI/CD processes are robust, automated, and designed for real-world operational demands. In close collaboration with data scientists, geophysicists, and cross-functional engineering teams, this role translates research-grade algorithms into resilient services. As part of a fast-moving, government-funded technology business, the MLOps Engineer operates with high ownership in a low-ceremony, applied research environment, bringing structure, repeatability, and best practices to mission-driven sensor analytics systems.

Requirements

  • 3+ years of experience in MLOps, ML Engineering, Data Engineering, or closely related roles building and running ML/data pipelines.
  • Strong Python data and ML stack experience, including tools such as Polars/Pandas, PyArrow, PySpark, NumPy/SciPy.
  • Experience integrating models built with frameworks such as PyTorch, TensorFlow, or Keras into scalable pipelines.
  • Demonstrated experience working with temporal data, ideally including sensor-derived signals.
  • Practical CI/CD experience for ML/data services using Git-based workflows.
  • Experience working in AWS or similar cloud environments.
  • Experience running containerized ML or data workloads in Kubernetes.
  • Experience collaborating closely with data scientists to integrate algorithms.
  • Eligible to obtain a U.S. Security Clearance – U.S. Citizenship required.

Nice To Haves

  • Direct hands-on experience with sensor datasets such as seismographic data, cellular sensor modalities, RF survey data, or GPS devices.
  • Experience deploying and scaling ML workloads in Kubernetes using KEDA or alternative event-driven autoscaling approaches.
  • Experience building event-driven or streaming pipelines (e.g., Kafka, Spark, Flink, or Sedona) feeding lakehouse-style open table formats (e.g., Iceberg or Delta).
  • Experience with SQL query engines (e.g., Trino, DuckDB, or Athena).
  • Experience selecting and operating orchestration frameworks such as Airflow, Dask, Ray, or Spark for scalable ML workloads.
  • Strong PostgreSQL experience, ideally with TimescaleDB and/or PostGIS, integrating ML outputs into operational databases.
  • DevOps experience with Helm and GitOps tooling.
  • Background in defense, cybersecurity, space, or other mission-driven sensor analytics environments.

Responsibilities

  • Design, build, and operate scalable ML and data pipelines for spatial-temporal and sensor-driven datasets.
  • Operationalize data science algorithms into reliable, distributed ML workflows covering feature extraction, training, evaluation, inference, and model lifecycle management.
  • Implement and maintain containerized ML workloads in cloud-native environments.
  • Integrate model outputs into downstream serving systems and analytical platforms to support web-based applications and operational decision-making.
  • Develop and maintain CI/CD pipelines for ML and data services.
  • Collaborate closely with data scientists to operationalize experimental models into reproducible, observable, and scalable production systems.
  • Take ownership of MLOps practices within an applied research team, bringing structure, repeatability, and best practices to evolving environments.