About The Position

Magic’s mission is to build safe AGI that accelerates humanity’s progress on the world’s most important problems. We believe the most promising path to safe AGI lies in automating research and code generation to improve models and solve alignment more reliably than humans can alone. Our approach combines frontier-scale pre-training, domain-specific RL, ultra-long context, and inference-time compute to achieve this goal.

About The Role

As a Software Engineer on the Pre-training Systems team, you will design and operate the distributed infrastructure that trains Magic’s long-context models at scale. This role focuses on large-scale model training across massive GPU clusters. You will work at the boundary between deep learning and distributed systems, ensuring that training runs are performant, reliable, and reproducible at extreme scale.

Magic’s long-context models create non-trivial systems challenges: sustained memory pressure, communication overhead across thousands of devices, long-running jobs that must survive failures, and efficient sequence packing under hardware constraints. You will own the systems that make large-scale pre-training stable and fast.

Requirements

  • Strong software engineering and distributed systems fundamentals
  • Experience training large models in multi-node GPU environments
  • Deep understanding of parallelism strategies and performance trade-offs
  • Experience debugging cross-layer issues in production ML systems
  • Strong ownership mindset and ability to operate critical infrastructure
  • Track record of improving performance or reliability of large-scale systems

Responsibilities

  • Scale distributed training across large GPU clusters (data, tensor, pipeline parallelism)
  • Optimize communication patterns and gradient synchronization
  • Improve checkpointing, fault tolerance, and job recovery systems
  • Profile and eliminate performance bottlenecks across compute, networking, and storage
  • Improve experiment reproducibility and orchestration workflows
  • Increase hardware utilization and training throughput
  • Collaborate with Kernels and Research to align model architecture with systems realities

Benefits

  • Equity is a significant part of total compensation, in addition to salary
  • 401(k) plan with 6% salary matching
  • Generous health, dental, and vision insurance for you and your dependents
  • Unlimited paid time off
  • Visa sponsorship and a relocation stipend to bring you to SF, where possible
  • A small, fast-paced, highly focused team
© 2024 Teal Labs, Inc