Machine Learning Data Engineer

Adobe
San Jose, CA (Remote)

About The Position

We are looking for a Machine Learning Data Engineer to join our Applied Science Data Frameworks team, responsible for building the foundational infrastructure that powers large-scale multimodal AI training and inference. This role is ideal for someone with strong distributed systems and data engineering fundamentals who is eager to work in an ML-adjacent environment, contributing to training data loaders, distributed inference frameworks, feature enrichment pipelines, and dataset management systems that enable ML teams to train foundation models at petabyte scale.

You'll work on high-impact projects involving distributed data loading for PyTorch training workloads, batch inference pipelines for feature enrichment, semantic search infrastructure for dataset discovery, and production-grade ML data pipelines that support generative AI model development. Your systems will process billions of images, videos, and multimodal assets across large-scale GPU clusters.

If you're excited about building distributed data frameworks, optimizing data pipelines at scale, and growing your expertise in ML infrastructure, we'd love to hear from you.

Requirements

  • 3–4 years of professional experience building and operating distributed systems or data infrastructure in production environments.
  • Solid understanding of distributed computing concepts and experience with frameworks like Apache Spark, Ray, Dask, or equivalent.
  • Familiarity with cloud platforms (AWS or Azure) and managed data platforms such as Databricks.
  • Proficiency in Python and strong software engineering fundamentals — system design, data structures, algorithms.
  • Familiarity with ML frameworks such as PyTorch or TensorFlow; hands-on ML experience is not required.
  • Basic familiarity with MLOps practices including CI/CD pipelines, containerization (Docker), and deployment automation.
  • Familiarity with batch inference architectures and large-scale data processing patterns is a plus.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Strong communication skills and ability to collaborate across engineering and research teams.

Nice To Haves

  • Hands-on ML experience
  • MS degree in Computer Science, Engineering, or a related field

Responsibilities

  • Contribute to building and maintaining distributed training data loaders that handle multi-source data ingestion, temporal sampling, and real-time transformations for large-scale model training workflows.
  • Help implement and maintain feature enrichment pipelines and dataset registry systems that support multimodal model training across images, video, documents, and text.
  • Build and maintain batch inference pipelines for large-scale feature extraction, processing assets through distributed GPU clusters with queue management and fault tolerance.
  • Develop data processing systems using frameworks like Ray, Apache Spark, DuckDB, or similar distributed computing tools for SQL-based data ingestion and Apache Arrow-based storage formats.
  • Support semantic search capabilities and vector database infrastructure (e.g., OpenSearch, LanceDB) for dataset discovery and embedding-based retrieval.
  • Contribute to CI/CD infrastructure for ML systems including self-hosted runner management, Docker image builds, automated testing pipelines, and deployment automation.
  • Collaborate with ML research teams to translate training requirements into reliable, scalable data loading and preprocessing infrastructure.
  • Write reusable framework components, SDKs, and documentation to help accelerate platform adoption across modeling teams.
  • Optimize data pipeline performance across dimensions like startup latency, throughput, memory footprint, and GPU utilization.
  • Contribute to observability and reliability standards for production data systems supporting 24/7 training workloads.

Benefits

  • Comprehensive benefits programs