About The Position

The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Research Engineer with a strong hands-on machine learning background to support the development of industry-leading multimodal large language foundation models (LLMs). As a Research Engineer on the AGI team, you will support the development of novel algorithms and modeling techniques to advance the state of the art. Your work will directly impact our customers and will leverage Amazon's heterogeneous data sources and large-scale computing resources to accelerate development of multimodal LLMs and generative AI. You will have significant influence on our overall strategy by working at the intersection of engineering and applied science to scale pre- and post-training workflows and build efficient models. You will help drive the system architecture and spearhead the best practices that enable a quality infrastructure.

The ideal candidate is passionate about new opportunities and has a track record of success in delivering new features and products. A commitment to teamwork, hustle, and strong communication skills (with both business and technical partners) are absolute requirements. Creating reliable, scalable, and high-performance products requires exceptional technical expertise, a sound understanding of the fundamentals of computer science, and practical experience building large-scale distributed systems. This person has thrived and succeeded in delivering high-quality technology products and services in a hyper-growth environment where priorities shift fast.

About The Team

The AGI Foundations team is responsible for building industry-leading generative AI foundation models.

Requirements

  • Bachelor's degree or foreign equivalent in Computer Science, Engineering, Mathematics, or a related field
  • Experience programming with at least one software programming language
  • PhD or Master's degree in machine learning or equivalent
  • 2+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience
  • Hands-on experience and expertise in training foundation models/LLMs, and/or low-level optimization of ML training workflows, CUDA kernels, and network I/O

Responsibilities

  • Pre-train and post-train multimodal LLMs
  • Scale model training on very large GPU and AWS Trainium clusters
  • Optimize training workflows using distributed training/parallelism techniques
  • Optimize low-level details of the training stack, including CUDA kernels, communication collectives, and network I/O
  • Utilize, build, and extend industry-leading frameworks (NeMo, Megatron Core, PyTorch, JAX, vLLM, TRT, etc.)
  • Work with other team members to investigate design approaches, prototype new technologies and scientific techniques, and evaluate technical feasibility
  • Deliver results independently in a self-organizing Agile environment while constantly embracing and adapting to new scientific advances

Benefits

  • Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits.