About The Position

AWS Trainium is deployed at scale, with millions of chips in production, and has been used for training and inference of frontier models. AWS Neuron is the software stack for Trainium, enabling customers to run deep learning and generative AI workloads with optimal performance and cost efficiency.

AWS Neuron is hiring a Technical Product Manager to work backward from Trainium customers and drive the developer experience for running high-performance ML workloads at scale on AWS Trainium, from getting started with Neuron Deep Learning Containers, AMIs, and AWS services to operating at scale through orchestration, resiliency, and observability.

You will drive the product strategy for how developers interact with Trainium through container ecosystems, resource management platforms, and AWS services. This includes Neuron integration with orchestration tools (SLURM, Kubernetes), AWS services (EKS, SageMaker), Neuron Deep Learning Containers and AMIs, and Linux distribution support. You will also drive the strategy for the resiliency and observability tools that provide system diagnostics, performance monitoring, health monitoring, automated recovery, and telemetry, allowing customers to operate AI training and inference workloads with maximum uptime and efficiency. In addition, you will shape how the Neuron Runtime System interacts with ML frameworks to ensure scalable, high-performance execution of models.

To be successful in this role, you will partner with the engineering teams and PMs responsible for training, inference, and performance tools, as well as with Marketing, Business Development, and the Solutions Architects supporting customers. You will also develop a deep understanding of the Trainium architecture and the Neuron Runtime System (including the Neuron Runtime Library, Neuron Kernel Driver, and collective communication stack) so you can define product strategy and make informed technical decisions.

The Ideal Candidate

The ideal candidate can balance competing customer priorities and drive alignment across engineering and business stakeholders in a fast-moving, early-stage product environment, and brings excellent written and verbal communication skills.

About AWS Neuron

AWS Neuron is the software stack for running deep learning and generative AI workloads on AWS Trainium and AWS Inferentia. It includes a compiler, runtime, training and inference libraries, and developer tools for monitoring, profiling, and debugging. Built on an open-source foundation, Neuron supports native PyTorch and JAX frameworks and popular ML libraries without code modification. Neuron enables rapid experimentation, distributed training across multiple chips and nodes, and cost-optimized inference powered by optimized kernels. For performance optimization, Neuron provides the Neuron Kernel Interface (NKI) for direct hardware access, along with a suite of profiling and debugging tools.
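
The paragraph above notes that Neuron supports native PyTorch without code modification. As a rough illustration only, the sketch below shows what a minimal training step looks like through the torch-xla lazy-tensor path that PyTorch Neuron (torch-neuronx) builds on; the environment is assumed (a Neuron Deep Learning Container or AMI with torch and torch_xla installed), and the model and data are placeholders, not official sample code.

    # Minimal sketch: one training step through the torch-xla lazy-tensor path
    # that PyTorch Neuron (torch-neuronx) builds on. Model and data are
    # placeholders; assumes a Neuron environment (DLC or AMI) with torch and
    # torch_xla installed.
    import torch
    import torch.nn as nn
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()              # resolves to an XLA device (a NeuronCore on Trainium)
    model = nn.Linear(1024, 1024).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.randn(32, 1024).to(device)  # placeholder batch
    y = torch.randn(32, 1024).to(device)

    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        xm.mark_step()                    # flush the lazily-recorded graph for compilation and execution

The point of the sketch is that the loop stays ordinary PyTorch; the device placement and the graph flush are the only XLA-specific touches.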

Requirements

  • Bachelor's degree or above in computer science, engineering, analytics, mathematics, statistics, IT or equivalent
  • 7+ years of industry experience, including 5+ years in technical product management and 3+ years in software development
  • Experience with technical product management for developer-facing products
  • Experience with resource management and orchestration systems (such as SLURM and Kubernetes schedulers), including monitoring, observability, and resilience (see the Kubernetes sketch after this list)
  • Excellent written and verbal communication abilities
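
As a hedged illustration of the orchestration experience above, the sketch below schedules a single-device job onto a Trainium-backed node with the official Kubernetes Python client. The image name is hypothetical, and the extended resource key aws.amazon.com/neuron is an assumption about the Neuron device plugin; verify the resource names your cluster actually advertises.

    # Hedged sketch: request one Neuron device for a pod via the Kubernetes Python client.
    # The image name is hypothetical and the resource key is an assumption about the
    # Neuron device plugin; check what your cluster advertises before relying on it.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

    container = client.V1Container(
        name="trainer",
        image="my-registry/neuron-training:latest",  # hypothetical image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"aws.amazon.com/neuron": "1"},   # assumed device-plugin resource name
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="neuron-train-demo"),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)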

Nice To Haves

  • Technical product management for developer-facing runtime and infrastructure products
  • Developer tools (SDKs, libraries, APIs) with focus on developer experience
  • Resource management and orchestration systems (SLURM, Kubernetes schedulers)
  • ML monitoring, observability, and resilience
  • Distributed systems and high-performance computing (HPC) environments
  • AWS cloud services and infrastructure
  • Distributed computing and parallel processing; collective communication libraries (NCCL, MPI) and communication patterns such as all-reduce, all-gather, and reduce-scatter (see the sketch after this list)
  • Experience with Linux systems, kernel development, and device drivers
  • Deep learning model training and inference deployments at scale; container orchestration (Kubernetes); computer architecture fundamentals
  • Track record of driving developer libraries, open standards, and ecosystem integration
  • High-performance networking technologies (RDMA, EFA)
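
For readers less familiar with the communication patterns named above, here is a small CPU-only sketch using torch.distributed with the gloo backend. On Trainium these patterns run through the Neuron collective communication stack instead, so this is conceptual, not Neuron sample code.

    # Conceptual sketch of all-reduce, all-gather, and (emulated) reduce-scatter
    # with torch.distributed on CPU (gloo backend). Not Neuron sample code.
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp

    def worker(rank: int, world_size: int) -> None:
        dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                                rank=rank, world_size=world_size)
        t = torch.full((4,), float(rank + 1))

        dist.all_reduce(t, op=dist.ReduceOp.SUM)     # every rank now holds the elementwise sum
        gathered = [torch.zeros(4) for _ in range(world_size)]
        dist.all_gather(gathered, t)                 # every rank now holds every rank's tensor

        # Reduce-scatter (each rank keeps one reduced shard) is native on accelerator
        # backends; emulated here by slicing the already-reduced tensor.
        shard = t.chunk(world_size)[rank]
        print(rank, shard)
        dist.destroy_process_group()

    if __name__ == "__main__":
        mp.spawn(worker, args=(2,), nprocs=2)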

Responsibilities

  • Product Strategy & Vision: Own product strategy and roadmap. Guide trade-offs between performance, scalability, and developer experience. Write PRFAQs and PRDs.
  • Customer Discovery: Understand deployment challenges, orchestration needs, and infrastructure pain points. Represent customer needs in executive prioritization.
  • Technical Leadership: Drive alignment across Neuron components (Runtime, Kernel Driver, Collective Communication, container infrastructure) and AWS services. Partner with training, inference, and performance PMs. Write user stories and define success metrics.
  • Impact: Enable customers (Anthropic, Databricks, AWS teams) to deploy, monitor, and operate ML workloads at scale through container orchestration, resource management, health monitoring, and observability (see the telemetry sketch after this list).
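
As a hedged illustration of the observability responsibility above, the snippet below publishes a custom utilization metric to Amazon CloudWatch with boto3. The namespace, metric name, and the read_neuroncore_utilization() helper are placeholders; a real deployment would source values from the Neuron monitoring tools rather than this stub.

    # Hedged sketch: publish a custom accelerator-utilization metric to CloudWatch.
    # Namespace, metric name, and the utilization source are placeholders.
    import time
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def read_neuroncore_utilization() -> float:
        """Hypothetical hook; a real agent would read from the Neuron monitoring tools."""
        return 0.0

    while True:
        cloudwatch.put_metric_data(
            Namespace="Example/NeuronTelemetry",        # placeholder namespace
            MetricData=[{
                "MetricName": "NeuronCoreUtilization",  # placeholder metric name
                "Unit": "Percent",
                "Value": read_neuroncore_utilization(),
            }],
        )
        time.sleep(60)                                  # emit once per minute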

Benefits

  • Health insurance (medical, dental, vision, prescription), Basic Life & AD&D insurance with the option for supplemental life plans, an Employee Assistance Program (EAP), mental health support, a Medical Advice Line, Flexible Spending Accounts, and adoption and surrogacy reimbursement coverage
  • 401(k) matching
  • Paid time off
  • Parental leave