Software Engineer, ML Platform Infrastructure

Nuro | Mountain View, CA
$160,360 - $240,540

About The Position

Nuro is seeking a Software Engineer with expertise in large-scale infrastructure, workload orchestration, and data processing to join our ML Infrastructure team. In this role, you will focus on building and evolving the core platform that provides researchers and engineers with seamless access to compute and data resources. You will be responsible for executing the technical strategy for automated resource provisioning, high-performance workload scheduling, and efficient feature management to accelerate the Nuro Driver™ development lifecycle.

Requirements

  • 3+ years of professional experience in ML Infrastructure, Backend Platform Engineering, or Distributed Systems.
  • Deep familiarity with modern Infrastructure-as-Code and provisioning tools such as Terraform, Pulumi, or Crossplane.
  • Hands-on experience building or managing large-scale orchestrators for compute-heavy workloads (e.g., Kubernetes, KubeRay, Ray, Slurm, or Volcano).
  • Proficiency in at least one distributed processing framework, such as Apache Spark or Apache Beam, for large-scale data extraction and transformation.
  • Experience implementing or maintaining feature stores and caching layers (e.g., Feast, Hopsworks, or Redis-based custom caching).
  • A strong understanding of distributed systems, networking, and storage bottlenecks in the context of high-performance computing.

Nice To Haves

  • Active contributor to open-source projects in the MLOps or Cloud-Native ecosystem (e.g., CNCF, Ray, or Kubeflow communities).
  • Experience with high-performance storage systems (e.g., Lustre, Ceph, or specialized NVMe caching) for ML data loading.
  • Knowledge of cost-optimization strategies for large-scale GPU clusters in public clouds (AWS, GCP, or Azure).

Responsibilities

  • Resource Provisioning & IaC: Scaling automated infrastructure-as-code (IaC) pipelines to manage thousands of GPU/CPU nodes across diverse environments.
  • Intelligent Scheduling: Designing and optimizing workload orchestration to maximize hardware utilization, minimize job wait times, and handle massive-scale distributed training.
  • Data & ETL: Designing robust pipelines for the extraction and transformation of petabyte-scale sensor and telemetry data into ML-ready formats.
  • Feature Management: Implementing robust feature caching and storage solutions to reduce redundant computations and ensure low-latency access to pre-computed features.
  • Platform Abstraction: Contributing to a unified ML platform that abstracts complex cloud infrastructure for end-users.

Benefits

  • In addition to base salary, this position is eligible for an annual performance bonus, equity, and a competitive benefits package.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 501-1,000

© 2024 Teal Labs, Inc