Annapurna Labs designs silicon and software that accelerate innovation. Customers choose us to create cloud solutions to challenges that were unimaginable only a short time ago. Our custom chips, accelerators, and software stacks let us take on technical problems that have never been seen before and deliver results that help our customers change the world.

AWS Neuron is the complete software stack for the AWS Trainium and Inferentia cloud-scale machine learning accelerators and the Trn3/Trn2/Trn1 and Inf2/Inf1 servers that use them.

This role is for a software engineer on the Distributed Training team for AWS Neuron. The role covers development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models such as Llama, Qwen, gpt-oss, and DeepSeek, as well as multi-modal generation models such as Stable Diffusion, Flux, and WAN. The Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on AWS Trainium: maximizing training throughput, minimizing time-to-convergence, and pushing the boundaries of training efficiency. You will identify and resolve performance bottlenecks across the stack, from collective communications and memory utilization to compiler optimizations and kernel performance.
Job Type
Full-time
Career Level
Mid Level