You’ll be working on our data team, focused on the quality of the datasets delivered for training our models. This is a hands-on role where your #1 mission is to improve the quality of the pretraining datasets by leveraging your previous experience, intuition, and training experiments. The role focuses in particular on generating synthetic data at scale and determining the best strategies for incorporating that data into the training of large models. You’ll collaborate closely with other teams, such as Pretraining, Post-training, Evals, and Product, to define high-quality data needs that map to missing model capabilities and downstream use cases. Staying in sync with the latest research in synthetic data generation and pretraining is key to success in this role. You will regularly lead original research initiatives through short, time-bounded experiments while deploying highly technical engineering solutions into production. Because the volumes of data to process are massive, you'll have a performant distributed data pipeline and a large GPU cluster at your disposal.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
101-250 employees