Staff ETL Engineer

Unity Technologies, Washington, DC

About The Position

We are seeking a senior ML engineer to design and evolve our large-scale offline ML platform. This role focuses on building reliable infrastructure for generating training datasets, orchestrating ML workflows, and enabling efficient, distributed model training at scale. You will work closely with ML engineers and platform teams to ensure our pipelines can efficiently handle growing data volumes and increasingly complex training workloads. You will play a key role in shaping how training datasets are prepared, validated, and delivered to distributed training systems, while ensuring the reliability, scalability, and performance of our offline ML platform.

Requirements

  • Strong experience building large-scale ML pipelines
  • Experience with distributed computing frameworks such as Ray, Spark, or Flink, and familiarity with the Ray ecosystem (Ray Data, Ray Train) for distributed data processing and model training
  • Experience building infrastructure for training data generation, dataset preparation, or ML feature pipelines
  • Deep experience designing and operating production-grade data pipelines
  • Strong programming skills in Python and experience working with large-scale distributed workloads
  • Experience with modern data infrastructure (data lakes, warehouses, orchestration systems, streaming platforms)
  • Strong systems thinking, with the ability to reason about performance, scalability, reliability, and cost tradeoffs in distributed systems
  • Proven ability to lead technical direction and influence architectural decisions across teams without formal authority

Responsibilities

  • Design and operate large-scale data pipelines that generate training datasets for model training and experimentation
  • Develop infrastructure that supports distributed training workflows using technologies such as PyTorch, Ray Data, and Ray Train
  • Integrate ML pipelines with workflow orchestration systems (e.g., Flyte, Airflow, or similar) to enable reliable multi-stage training workflows
  • Improve reproducibility and observability of ML pipelines through dataset validation, monitoring, and automated testing
  • Optimize performance and resource utilization across distributed compute systems used for data processing and model training
  • Partner closely with ML engineers to enable efficient large-scale experimentation and model iteration
  • Lead architectural improvements to ensure our offline ML pipelines remain scalable, reliable, and cost-efficient

Benefits

  • Comprehensive health, life, and disability insurance
  • Commute subsidy
  • Employee stock ownership
  • Competitive retirement/pension plans
  • Generous vacation and personal days
  • Support for new parents through leave and family-care programs
  • Office snacks
  • Mental Health and Wellbeing programs and support
  • Employee Resource Groups
  • Global Employee Assistance Program
  • Training and development programs
  • Volunteering and donation matching program