Sr. SDE, AGI Inference - GenAI

Amazon.com, Inc. - Sunnyvale, CA

About The Position

The Sensory Inference team at AGI is a group of innovative developers working on ground-breaking multi-modal inference solutions that revolutionize how AI systems perceive and interact with the world. We push the limits of inference performance to provide the best possible experience for our users across a wide range of applications and devices. We are looking for talented, passionate, and dedicated Inference Engineers to join our team and build innovative, mission-critical, high-volume production systems that will shape the future of AI. You will have an enormous opportunity to make an impact on the design, architecture, and implementation of novel technologies used every day, potentially by people you know. This role offers the exciting chance to work in a highly technical domain at the boundary between fundamental AI research and production engineering, applying inference-efficiency techniques such as quantization, speculative decoding, and long context.

Requirements

  • 5+ years of non-internship professional software development experience
  • 5+ years of programming experience in at least one software programming language
  • 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
  • Experience as a mentor, tech lead, or leader of an engineering team
  • Experience with inference frameworks such as PyTorch, TensorFlow, ONNXRuntime, TensorRT, and LLaMA.cpp

Nice To Haves

  • 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Proficiency in performance optimization for CPU, GPU, or AI hardware
  • Proficiency in kernel programming for accelerated hardware using programming models such as (but not limited to) CUDA, OpenMP, OpenCL, Vulkan, and Metal
  • Experience with latency-sensitive optimizations and real-time inference
  • Knowledge of model compression techniques (quantization, pruning, distillation, etc.)
  • Experience with LLM efficiency techniques like speculative decoding and long context

Responsibilities

  • Develop high-performance inference software for a diverse set of neural models, typically in C/C++
  • Design, prototype, and evaluate new inference engines and optimization techniques
  • Participate in deep-dive analysis and profiling of production code
  • Optimize inference performance across various platforms (on-device, cloud-based CPU, GPU, proprietary ASICs)
  • Collaborate closely with research scientists to bring next-generation neural models to life
  • Partner with internal and external hardware teams to maximize platform utilization
  • Work in an Agile environment to deliver high-quality software against tight schedules
  • Hold a high bar for technical excellence within the team and across the organization

Benefits

  • Medical
  • Financial
  • Other benefits


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: General Merchandise Retailers
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000
