About The Position

We’re building foundational large language model capabilities for Amazon Stores that combine general world knowledge with Amazon’s e-commerce domain expertise to create more intuitive, conversational, and personalized shopping experiences for our customers. We’re looking for pioneers who are passionate about technology, innovation, and customer experience, and who want to make a lasting impact in a rapidly evolving space. You’ll work alongside talented scientists and engineers to invent on behalf of customers and unlock the next generation of LLM-powered shopping experiences. If you’re excited about working at the intersection of large-scale ML systems, post-training and inference optimization, and customer-facing innovation, this is a unique opportunity to join a dynamic team shaping the future of AI at Amazon.

Key Job Responsibilities

In this role, you will leverage your engineering expertise to develop and optimize generative AI systems for shopping. Your day-to-day work is described in the Responsibilities section below.

Requirements

  • 5+ years of non-internship professional software development experience
  • 5+ years of experience programming with at least one software programming language
  • 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
  • Experience as a mentor, tech lead, or leader of an engineering team
  • Experience with one of the following areas: machine learning technologies, reinforcement learning, deep learning, computer vision, natural language processing (NLP), or related applications
  • 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Bachelor's degree in computer science or equivalent
  • Experience with Machine Learning and Large Language Model fundamentals, including architecture, training/inference lifecycles, and optimization of model execution, or experience in computer architecture
  • Experience writing CUDA kernels or other low-level ML kernels
  • Experience with vLLM, SGLang, TensorRT, or similar platforms in production environments, or experience with PyTorch or JAX

Responsibilities

  • Design and optimize high-performance kernels, custom operators, and low-level acceleration techniques that maximize hardware utilization and reduce computational overhead for LLM training and inference.
  • Drive improvements in memory management, parallel computing, kernel fusion, attention optimization, and matrix multiplication efficiency to reduce latency and increase throughput at scale.
  • Partner closely with applied scientists, engineering teams, and product managers to define requirements, support experimentation, and deliver production-ready systems.
  • Move quickly in ambiguous environments, make thoughtful short- and long-term trade-offs, and deliver incrementally across a wide range of technologies, from distributed data processing to ML infrastructure and kernel-level optimization.
  • Develop tooling to accelerate experimentation, improve observability, and generate insights across model quality, latency, throughput, and efficiency metrics.

Benefits

  • health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and option for Supplemental life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage)
  • 401(k) matching
  • paid time off
  • parental leave