AI Software Engineer

Zoom | Seattle, WA (Hybrid)
$143,000 - $312,800
Posted 75 days ago

About The Position

The AI Infra team at Zoom is dedicated to building a world-class inference infrastructure that powers all of Zoom's AI services. Our mission is to deliver high efficiency, scalability, and cost optimization across a wide range of AI applications, including large language models (LLMs), vision-language models (VLMs), automatic speech recognition (ASR), and machine translation. We focus on enabling seamless collaboration between small and large models, ensuring cost-effective, privacy-preserving, and high-quality AI services for our customers.

As an AI Software Engineer on Zoom's AI Infra team, you will design, optimize, and scale the runtimes and services that power our AI models. Your work will directly improve efficiency, reduce latency, and lower costs across Zoom's AI stack, ensuring reliable, high-performance AI experiences for millions of users.

Requirements

  • Track record of building scalable, reliable AI infrastructure under real-world production constraints.
  • Strong expertise in GPU programming and optimization (CUDA, kernel-level development).
  • Deep experience with transformer-based models and inference frameworks (vLLM, TensorRT-LLM, SGLang, ONNX Runtime).
  • Proficiency in Python and C++ (Java is a plus).
  • Hands-on experience with PyTorch (torch.compile, graph-level optimization) and/or TensorFlow.
  • Knowledge of low-level hardware concepts (GPU memory hierarchy, caching, vectorization).
  • Familiarity with cloud platforms (AWS, GCP, Azure) and AI deployment tools (Docker, Kubernetes, MLflow).

Responsibilities

  • Develop and optimize AI runtimes for LLMs, ASR, and MT systems with a focus on performance and cost efficiency.
  • Apply GPU-level optimization techniques including CUDA, kernel fusion, and memory throughput improvements.
  • Implement inference optimizations such as torch.compile, graph optimization, KV caching, and continuous batching.
  • Build scalable, highly available infrastructure services to support enterprise-grade AI workloads.
  • Optimize models for edge devices (laptops, PCs, and mobile devices) as well as for large-scale cloud deployments.
  • Continuously improve latency, throughput, and efficiency across serving pipelines.
  • Rapidly integrate and optimize new industry models to stay ahead in AI infrastructure.

Benefits

  • Variety of perks, benefits, and options to help employees maintain their physical, mental, emotional, and financial health.
  • Support for work-life balance.
  • Opportunities to contribute to the community in meaningful ways.