About The Position

You’ll be working on our data team, focused on the quality of the datasets delivered for training our models. This is a hands-on role whose primary mission is to improve the quality of our pretraining datasets by leveraging your experience, intuition, and training experiments. The role focuses in particular on generating synthetic data at scale and determining the best strategies for leveraging that data when training large models. You’ll collaborate closely with other teams, such as Pretraining, Post-training, Evals, and Product, to define high-quality data needs that map to missing model capabilities and downstream use cases.

Staying in sync with the latest research in synthetic data generation and pretraining is key to success in this role. You’ll regularly lead original research initiatives through short, time-bounded experiments while deploying highly technical engineering solutions into production. Because the volumes of data to process are massive, you’ll have a performant distributed data pipeline and a large GPU cluster at your disposal.

Requirements

  • Strong machine learning and engineering background
  • Experience with Large Language Models (LLMs), including:
      ◦ An understanding of how LLMs learn
      ◦ Data ablations and scaling laws
      ◦ Post-training techniques
      ◦ Training reasoning and agentic models
  • Experience implementing cost-efficient, complex pipelines that generate synthetic datasets at scale, optimizing for data quality, correctness, diversity, etc.
  • Experience with evals that track model capabilities (general knowledge, reasoning, math, coding, long-context, etc.)
  • Experience building trillion-scale pretraining datasets, and familiarity with concepts like data curation, deduplication, data mixing, tokenization, curriculum, and the impact of data repetition
  • Excellent programming skills in Python
  • Strong prompt engineering skills
  • Experience working with large-scale GPU clusters and distributed data pipelines
  • Strong obsession with data quality

Nice To Haves

  • Authorship of scientific papers on topics such as applied deep learning, LLMs, or source code generation
  • Ability to discuss the latest papers freely and dive into the fine details
  • Reasonably opinionated

Responsibilities

  • Follow the latest research on LLMs, and on synthetic data generation in particular. Stay familiar with the most relevant open-source datasets and models.
  • Design and implement complex pipelines that generate large amounts of data while maintaining high diversity and making efficient use of available resources.
  • Work closely with other teams, such as Pretraining, Post-training, Evals, and Product, to ensure alignment on the quality of the models delivered.
  • Continuously measure and refine the quality of the datasets being generated while validating the final data strategy through quantitative data ablation experiments.

Benefits

  • Fully remote work & flexible hours
  • 37 days/year of vacation & holidays
  • Health insurance allowance for you and dependents
  • Company-provided equipment
  • Wellbeing, always-be-learning and home office allowances
  • Frequent team get-togethers
  • A diverse, inclusive, people-first culture

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 101-250
