Amazon • Posted 5 days ago
Full-time • Mid Level
Seattle, WA
5,001-10,000 employees

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. This comprehensive toolkit includes an ML compiler, runtime, and application framework that integrates seamlessly with popular ML frameworks such as PyTorch and JAX, enabling high-performance ML inference and training.

The Inference Enablement and Acceleration team is at the forefront of enabling a wide range of models and supporting novel architectures while maximizing their performance on AWS's custom ML accelerators. Working across the stack, from PyTorch down to the hardware-software boundary, our engineers build systematic infrastructure, invent new methods, and create high-performance kernels for ML functions, ensuring every compute unit is fine-tuned for our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.

As part of the broader Neuron organization, our team works across multiple technology layers, from frameworks and kernels to the compiler, runtime, and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance. This role offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, where you'll help shape the future of AI acceleration technology. You will architect and implement business-critical features and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. It is a unique learning culture.

The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators. The team also collaborates with open source ecosystems to provide seamless integration and deliver peak performance at scale for customers and developers. This role is responsible for the development, enablement, and performance tuning of a wide variety of LLM model families, including massive-scale large language models such as the Llama family, DeepSeek, and beyond. The Inference Enablement and Acceleration team works side by side with compiler and runtime engineers to create, build, and tune distributed inference solutions on Trainium and Inferentia. Experience optimizing inference performance for both latency and throughput on such large models, across the stack from system-level optimizations up to PyTorch or JAX, is a must-have.

  • Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
  • Participate in all stages of the ML system development lifecycle, including distributed architecture design, implementation, performance profiling, hardware-specific optimization, testing, and production deployment.
  • Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
  • Design and implement high-performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
  • Analyze and optimize system-level performance across multiple generations of Neuron hardware.
  • Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
  • Implement optimizations such as fusion, sharding, tiling, and scheduling.
  • Conduct comprehensive testing, including unit and end-to-end model testing, with continuous deployment and releases through pipelines.
  • Work directly with customers to enable and optimize their ML models on AWS accelerators.
  • Collaborate across teams to develop innovative optimization techniques.

  • 3+ years of non-internship professional software development experience
  • Bachelor's degree in computer science or equivalent
  • 3+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
  • Fundamentals of machine learning and LLMs, including their architectures and training and inference lifecycles, along with hands-on experience optimizing model execution.
  • Software development experience in C++ or Python (experience in at least one language is required).
  • Strong understanding of system performance, memory management, and parallel computing principles.
  • Proficiency in debugging, profiling, and applying best software engineering practices in large-scale systems.
  • Familiarity with PyTorch, JIT compilation, and AOT tracing.
  • Familiarity with CUDA kernels or equivalent low-level ML kernels.
  • Deep understanding of computer architecture and operating-system-level software, with working knowledge of parallel computing.
  • Experience developing performant kernels with libraries such as CUTLASS or FlashInfer is a strong plus.
  • Familiarity with tile-level programming syntax and semantics, such as Triton.
  • Experience with online/offline inference serving with vLLM, SGLang, TensorRT, or similar platforms in production environments.