Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the role

As a Software Engineer on the Pre-training Data team, you will design and operate the systems that define our model’s training corpus at scale. This role is focused on large-scale data acquisition, processing, filtering, mixture design, and ablation-driven iteration. You will work on the infrastructure and experimental loops that determine what data we train on — and therefore what the model learns.

Magic’s long-context models introduce non-trivial data challenges: maintaining document structure and long-range coherence, designing sequence chunking and packing strategies, balancing mixture trade-offs, and ensuring data quality at internet scale. You will own systems that turn these questions into measurable training decisions.

This role can evolve into broader ownership of corpus strategy, deeper involvement in training systems, or a transition into ML systems work as you shape how data and model behavior interact at scale.
Job Type: Full-time
Career Level: Mid Level
Education Level: Not specified