About The Position

AWS Trainium is deployed at scale, with millions of chips in production, used for training and inference of frontier models. AWS Neuron is the software stack for Trainium, enabling customers to run deep learning and generative AI workloads with optimal performance and cost efficiency.

AWS Neuron is hiring a Principal Technical Product Manager to define and drive product strategy for training software on Trainium. This includes distributed training libraries, post-training workflows (RLHF, DPO, fine-tuning), reinforcement learning frameworks, and training performance optimization. Your mission is to enable researchers and operators to train frontier models at scale on Trainium, from single-node experimentation to distributed training across thousands of nodes.

You will be the champion inside AWS for frontier model builders pushing the bounds of scale and resilience for current and emerging training paradigms. You will work with customers inside and outside the company to identify key improvements and stay ahead of the training landscape. You will define how Neuron supports the training AI/ML ecosystem and which tools customers will use for their training workflows on Trainium.

To be successful, you will partner with engineering teams building training libraries and distributed training infrastructure, applied scientists developing optimization techniques, and PMs responsible for compiler, runtime, NKI, and infrastructure. You will develop deep knowledge of AI/ML training architectures, distributed training systems, model parallelism strategies, and training performance optimization to define product strategy effectively and make informed technical decisions.

The Ideal Candidate

The ideal candidate has a solid understanding of large-scale model training, distributed training architectures, post-training workflows, and reinforcement learning. They can assess the technical implications of training software stack decisions, understand customer needs, and drive developer experience improvements. They can also navigate ambiguity in a fast-moving, early-stage initiative, balance competing priorities across multiple workstreams, and drive alignment across engineering and science stakeholders with excellent written and verbal communication.

Requirements

  • 7+ years of experience as a Technical Product Manager
  • Bachelor's degree in computer science, engineering, analytics, mathematics, statistics, IT, or an equivalent field
  • Experience with large-scale model training workflows, including solid knowledge of distributed training concepts
  • Familiarity with major AI/ML training frameworks (JAX or PyTorch) and how training libraries interact with them
  • Experience driving product strategy, long-term roadmap development, and cross-organizational alignment
  • Excellent written and verbal communication abilities, including executive-level communication

Nice To Haves

  • Experience with PyTorch or JAX distributed training
  • Track record of driving developer-facing training libraries and tools
  • Experience with design and scaling of training optimization software (e.g., NeMo, TorchTitan, TRL, VeRL, MaxText, AXLearn, or similar)
  • Experience leading RL research-to-production workflows at scale
  • Experience with post-training workflows including RLHF, DPO, reward modeling, and fine-tuning
  • Experience with AI/ML training accelerators and hardware, including training performance optimization, profiling, and tooling
  • Experience with distributed training of large-scale models including model parallel training techniques (tensor, pipeline, sequence, and expert parallelism)
  • Experience working on open source and GitHub-first developer products with deep customer interactions
  • Track record of driving open standards and AI/ML ecosystem integration for training workflows
  • Experience operating in early-stage, ambiguous environments with startup-like velocity

Responsibilities

  • Define and execute training product strategy and roadmap working backwards from customer requirements in collaboration with engineering leadership. Define the vision for how customers train frontier models at scale on Trainium, balancing performance, developer experience, and AI/ML ecosystem compatibility. Produce PRFAQs and PRDs for training capabilities. Drive technical alignment across Neuron training libraries, distributed training infrastructure, and dependencies. Partner with PMs responsible for compiler, NKI, runtime, and infrastructure. Drive trade-offs between training performance, scalability, developer experience, and AI/ML ecosystem compatibility. Define requirements for reusable training building blocks that compose into end-to-end workflows.
  • Drive strategy for post-training workflows including RLHF, DPO, reward modeling, and fine-tuning at scale. Define requirements for how Neuron supports emerging training paradigms, model architectures, and RL-based optimization loops. Lead the product experience for RL research-to-production workflows on Trainium. Drive the creation and optimization of RL libraries and frameworks that serve researchers and production model builders.
  • Work with BD, Solutions Architecture, and GTM teams to engage customers training frontier models on Trainium. Understand their distributed training challenges, RL needs, performance optimization requirements, and framework preferences. Translate customer pain points into product requirements. Define success metrics for training adoption and performance. Support customer enablement for training migration and optimization.
  • Define how Neuron supports the training AI/ML ecosystem and what tools customers will use for their training workflows on Trainium. Own the technical depth on training-specific AI/ML ecosystem tools and define how Neuron's training libraries integrate with them. Track training-specific AI/ML ecosystem trends and feed them into product planning. Drive open source community engagement and upstream contributions for training-related tools. Coordinate with BD on partnership discussions where training-specific technical input is needed.
  • Lead end-to-end launches for training capabilities, coordinating documentation, field enablement, and customer communications. Partner with Marketing and Solutions Architecture to drive awareness and adoption. Define launch success criteria and track adoption metrics.

Benefits

  • health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and option for Supplemental life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave