Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About the role

As a Software Engineer on the Pre-training Systems team, you will design and operate the distributed infrastructure that trains Magic’s long-context models at scale. This role focuses on large-scale model training across massive GPU clusters. You will work at the boundary between deep learning and distributed systems, ensuring that training runs are performant, reliable, and reproducible at extreme scale.

Magic’s long-context models create non-trivial systems challenges: sustained memory pressure, communication overhead across thousands of devices, long-running jobs that must survive failures, and efficient sequence packing under hardware constraints. You will own the systems that make large-scale pre-training stable and fast.
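To make the sequence-packing challenge concrete, here is a minimal sketch of greedy first-fit-decreasing packing, which fits variable-length training sequences into fixed-capacity context windows to minimize padding. The function name and parameters are illustrative assumptions, not Magic’s actual implementation.

```python
def pack_sequences(lengths, max_len):
    """Greedily pack sequence lengths into bins of capacity max_len.

    Uses first-fit decreasing: sort sequences longest-first, then place
    each into the first bin with enough remaining capacity, opening a
    new bin when none fits. Returns a list of bins (lists of lengths).
    """
    bins = []  # each bin is [remaining_capacity, [sequence lengths]]
    for length in sorted(lengths, reverse=True):
        for b in bins:
            if b[0] >= length:
                b[0] -= length
                b[1].append(length)
                break
        else:
            # no existing bin fits; open a new one
            bins.append([max_len - length, [length]])
    return [b[1] for b in bins]

# Example: pack five sequences into 1024-token windows.
packed = pack_sequences([700, 300, 512, 512, 200], max_len=1024)
```

In practice, production packers also contend with attention-mask bookkeeping and per-device load balance, but the bin-packing core is the same shape.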
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed