About The Position

Together.ai is a leader in developing AI infrastructure that powers the training of state-of-the-art models. We focus on creating scalable, efficient systems for handling massive datasets and managing large-scale distributed checkpoints, ensuring seamless workflows for training and fine-tuning AI models. We are seeking a Training Dataset and Checkpoint Acceleration Engineer to optimize data pipelines and checkpoint mechanisms for large-scale machine learning workloads. In this role, you will work at the intersection of data engineering and distributed systems, ensuring that training workflows are highly performant, reliable, and cost-efficient.

Requirements

  • Experience: 5+ years of experience in data engineering, distributed systems, or ML infrastructure.
  • Technical Skills: Expertise in high-performance data processing libraries (e.g., PyTorch DataLoader, tf.data, NVIDIA DALI).
  • Proficiency in distributed storage systems and data formats (e.g., Parquet, HDF5).
  • Strong understanding of checkpointing frameworks, POSIX I/O semantics, and parallel file systems (e.g., Lustre, GPFS).
  • Programming: Proficient in Python, C++, or Go for performance-critical systems.
  • Optimization Techniques: Experience with I/O optimization techniques (e.g., asynchronous data loading, prefetching).
  • Familiarity with compression and serialization for large datasets and checkpoints.
  • Soft Skills: Analytical and problem-solving mindset.
  • Strong communication and collaboration skills across teams.

Nice To Haves

  • Experience with ML frameworks (e.g., PyTorch, TensorFlow, JAX) and distributed training.
  • Familiarity with hardware accelerators (e.g., GPUs, TPUs) and storage optimizations.
  • Open-source contributions to, or familiarity with, projects related to data pipelines or checkpointing.
  • Experience with incremental and real-time checkpointing solutions.

Responsibilities

  • Dataset Acceleration: Design and optimize high-throughput data pipelines for streaming and processing massive training datasets.
  • Implement caching, sharding, and prefetching techniques to maximize data-loading efficiency (sketched in the first example after this list).
  • Ensure efficient integration with distributed storage systems (e.g., S3, GCS, Lustre, Ceph).
  • Checkpointing Systems: Build and optimize distributed checkpoint mechanisms for large-scale training workflows.
  • Implement techniques to minimize checkpoint I/O overhead and ensure fault tolerance (see the second sketch after this list).
  • Develop incremental and differential checkpointing solutions to reduce storage costs.
  • Performance Optimization: Profile and debug bottlenecks in data pipelines and checkpoint systems.
  • Optimize GPU/TPU utilization by ensuring efficient data feeding and fast checkpoint recovery.
  • Scalability and Reliability: Develop systems that scale efficiently across thousands of nodes and petabyte-scale datasets.
  • Ensure fault-tolerant recovery and resume mechanisms for long-running training jobs.
  • Collaboration and Support: Work closely with ML researchers, data engineers, and infrastructure teams to understand workload requirements.
  • Build tools and frameworks to enable seamless integration of dataset and checkpointing systems with existing ML workflows.
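
For illustration only: a minimal sketch, assuming a PyTorch-based training stack, of the kind of sharded, prefetched data loading the Dataset Acceleration items above describe. File names, shard layout, rank/world size, and batch size are hypothetical placeholders, not Together.ai's actual pipeline.

```python
from pathlib import Path

from torch.utils.data import DataLoader, IterableDataset, get_worker_info

# Create two tiny local "shards" so the sketch runs end to end; a real pipeline
# would stream Parquet/HDF5 shards from distributed storage (e.g., S3, GCS, Lustre).
for i in range(2):
    Path(f"shard-{i:03d}.txt").write_text("\n".join(f"sample {i}-{j}" for j in range(100)))


class ShardedTextDataset(IterableDataset):
    """Streams records from the shards assigned to this rank, split again per worker."""

    def __init__(self, shard_paths, rank=0, world_size=1):
        # Round-robin assignment of shards across data-parallel ranks.
        self.shard_paths = shard_paths[rank::world_size]

    def __iter__(self):
        paths = self.shard_paths
        worker = get_worker_info()
        if worker is not None:
            # Split shards across DataLoader workers to avoid duplicate reads.
            paths = paths[worker.id :: worker.num_workers]
        for path in paths:
            with open(path) as f:
                for line in f:
                    yield line.rstrip("\n")


loader = DataLoader(
    ShardedTextDataset([f"shard-{i:03d}.txt" for i in range(2)]),
    batch_size=32,
    num_workers=2,      # background workers overlap storage I/O with compute
    prefetch_factor=2,  # each worker keeps two batches in flight
    pin_memory=True,    # pinned buffers speed up host-to-GPU copies
)

for batch in loader:
    pass  # in training, each batch would feed a GPU/TPU step
```

The same pattern extends to streaming columnar shards from object storage, with worker count and prefetch depth tuned against the GPU feed rate.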
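Similarly, a hedged sketch of one common way to reduce checkpoint stalls, in the spirit of the Checkpointing Systems items above: snapshot state to CPU memory on the critical path, then write to storage in a background thread. The model, file names, and checkpoint interval are placeholders; this is not Together.ai's checkpointing system.

```python
import threading

import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)  # stand-in for a much larger training model
optimizer = torch.optim.AdamW(model.parameters())


def async_checkpoint(model, optimizer, step, path):
    # Fast part on the critical path: copy parameters to CPU so training can
    # keep mutating them immediately after this call returns.
    snapshot = {
        "step": step,
        "model": {k: v.detach().cpu().clone() for k, v in model.state_dict().items()},
        # A production system would deep-copy optimizer state here as well
        # before handing it to the writer thread.
        "optimizer": optimizer.state_dict(),
    }

    # Slow part off the critical path: serialize and write in the background.
    def _write():
        torch.save(snapshot, path)

    writer = threading.Thread(target=_write)
    writer.start()
    return writer  # caller joins before exit or before the next checkpoint


pending = None
for step in range(1, 201):
    # ... forward/backward/optimizer.step() would go here ...
    if step % 100 == 0:
        if pending is not None:
            pending.join()  # bound the number of in-flight checkpoint writes
        pending = async_checkpoint(model, optimizer, step, f"ckpt-{step}.pt")

if pending is not None:
    pending.join()
```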

Benefits

  • Competitive compensation
  • Startup equity
  • Health insurance
  • Other competitive benefits

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 101-250 employees
