About The Position

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn2 and future Trn3 servers that use them. This role is for a software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron. In this role, you will develop, enable, and performance-tune building blocks for all key ML model families, including Llama3, GPT OSS, Qwen3, DeepSeek, and beyond.

The Neuron Inference Technology team works side by side with the Inference Model Enablement, compiler, and runtime engineers to create, build, and tune high-performance distributed inference solutions for the latest generation of Trainium accelerators. Experience optimizing LLM inference performance with kernels, Python, PyTorch, or JAX is a must. This team develops optimized building blocks for the Neuron distributed inference library, tuning them for the highest performance and efficiency on Trn2 and Trn3 servers. As you develop technology components, you'll create metrics, implement automation and other improvements, and resolve the root causes of software defects. You'll also participate in design discussions and code reviews, and communicate with internal and external stakeholders. You will work cross-functionally with teams across Neuron in a fast-paced, startup-like development environment, where we constantly stay on top of the latest priorities as the AI landscape evolves.

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members offer one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise, so they feel empowered to take on more complex tasks in the future.

Requirements

  • 3+ years of non-internship professional software development experience
  • 2+ years of non-internship experience in the design or architecture (design patterns, reliability, and scaling) of new and existing systems
  • Experience programming with at least one software programming language

Nice To Haves

  • 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Bachelor's degree in computer science or equivalent
  • Experience optimizing LLM inference performance with kernels, Python, PyTorch or JAX

Responsibilities

  • Develop optimized building blocks for the Neuron distributed inference library
  • Tune building blocks for the highest performance and efficiency on Trn2 and Trn3 servers
  • Create metrics
  • Implement automation and other improvements
  • Resolve the root causes of software defects
  • Participate in design discussions and code reviews, and communicate with internal and external stakeholders
  • Work cross-functionally with teams across Neuron

Benefits

  • Our compensation reflects the cost of labor across several US geographic markets.
  • The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market.
  • Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.
  • Amazon is a total compensation company.
  • Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits.
  • For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits