Software Engineer, ML Infrastructure

Nuro · Mountain View, CA

About The Position

Nuro is seeking a Software Engineer with expertise in large-scale infrastructure, workload orchestration, and data processing to join our ML Infrastructure team. In this role, you will build and evolve the core platform that gives researchers and engineers seamless access to compute and data resources, and you will execute the technical strategy for automated resource provisioning, high-performance workload scheduling, and efficient feature management to accelerate the Nuro Driver™ development lifecycle. You will build the foundation that powers Nuro’s model development from experimentation to production.

Requirements

  • 3+ years of professional experience in ML Infrastructure, Backend Platform Engineering, or Distributed Systems.
  • Deep familiarity with modern Infrastructure-as-Code (IaC) and provisioning tools such as Terraform, Pulumi, or Crossplane.
  • Hands-on experience building or managing large-scale orchestrators for compute-heavy workloads (e.g., Kubernetes, KubeRay, Ray, Slurm, or Volcano).
  • Proficiency in at least one distributed processing framework, such as Apache Spark or Apache Beam, for large-scale data extraction and transformation (a minimal sketch follows this list).
  • Experience implementing or maintaining feature stores and caching layers (e.g., Feast, Hopsworks, or Redis-based custom caching).
  • A strong understanding of distributed systems, networking, and storage bottlenecks in the context of high-performance computing.
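
For flavor, here is a minimal sketch of the kind of large-scale extraction and transformation described above, written in PySpark. The bucket paths, column names, and schema are hypothetical placeholders, not Nuro systems:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal, hypothetical ETL sketch: raw telemetry in, ML-ready Parquet out.
spark = SparkSession.builder.appName("sensor-etl").getOrCreate()

# Read raw telemetry records (placeholder path and columns).
raw = spark.read.parquet("s3://example-bucket/raw/telemetry/")

# Normalize an epoch-millisecond timestamp and drop malformed rows.
cleaned = (
    raw
    .withColumn("event_time", (F.col("event_time_ms") / 1000).cast("timestamp"))
    .filter(F.col("vehicle_id").isNotNull())
)

# Write ML-ready output partitioned by date for efficient training reads.
(
    cleaned
    .withColumn("date", F.to_date("event_time"))
    .write.mode("overwrite")
    .partitionBy("date")
    .parquet("s3://example-bucket/curated/telemetry/")
)
```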

Nice To Haves

  • Active contributor to open-source projects in the MLOps or Cloud-Native ecosystem (e.g., CNCF, Ray, or Kubeflow communities).
  • Experience with high-performance storage systems (e.g., Lustre, Ceph, or specialized NVMe caching) for ML data loading.
  • Knowledge of cost-optimization strategies for large-scale GPU clusters in public clouds (AWS, GCP, or Azure); see the sketch after this list.
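
To make the cost-optimization point concrete, here is a back-of-the-envelope sketch of the spot-versus-on-demand trade-off for an interruption-tolerant training job. All prices and rates are illustrative assumptions, not real cloud quotes:

```python
def expected_hourly_cost(on_demand, spot, interruption_rate, rework_frac):
    """Compare on-demand vs. spot pricing for an interruption-tolerant job.

    Each interruption wastes some paid compute: progress lost since the
    last checkpoint plus restart overhead, modeled here as rework_frac.
    """
    effective_spot = spot * (1 + interruption_rate * rework_frac)
    return {
        "on_demand": on_demand,
        "effective_spot": effective_spot,
        "use_spot": effective_spot < on_demand,
    }

# Illustrative numbers only, not real cloud prices:
print(expected_hourly_cost(on_demand=32.0, spot=10.0,
                           interruption_rate=0.15, rework_frac=0.5))
```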

Responsibilities

  • Scaling automated IaC pipelines to manage thousands of GPU/CPU nodes across diverse environments.
  • Designing and optimizing workload orchestration to maximize hardware utilization, minimize job wait times, and handle massive-scale distributed training.
  • Designing robust pipelines for the extraction and transformation of petabyte-scale sensor and telemetry data into ML-ready formats.
  • Implementing feature caching and storage solutions to reduce redundant computation and ensure low-latency access to pre-computed features (see the sketch after this list).
  • Contributing to a unified ML platform that abstracts complex cloud infrastructure for end-users.
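
In the spirit of the feature-caching responsibility above, here is a minimal read-through cache sketch in Python using redis-py. The key scheme, TTL, and compute_fn hook are hypothetical choices, not a description of Nuro’s production feature store:

```python
import hashlib
import json

import redis

# Hypothetical cache client; host and port are placeholders.
client = redis.Redis(host="localhost", port=6379)

def feature_key(entity_id: str, feature_view: str, version: int) -> str:
    """Derive a stable cache key for an (entity, feature view, version) triple."""
    raw = f"{feature_view}:v{version}:{entity_id}"
    return hashlib.sha256(raw.encode()).hexdigest()

def get_or_compute(entity_id, feature_view, version, compute_fn, ttl_s=3600):
    """Return cached features if present; otherwise compute, cache, and return."""
    key = feature_key(entity_id, feature_view, version)
    cached = client.get(key)
    if cached is not None:               # cache hit: skip the expensive recompute
        return json.loads(cached)
    features = compute_fn(entity_id)     # cache miss: run the expensive path once
    client.set(key, json.dumps(features), ex=ttl_s)  # store with a TTL
    return features
```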

Benefits

  • Annual performance bonus
  • Equity
  • Competitive benefits package