AI Inference Engineer - Speech

Zoom | Seattle, WA (Hybrid)

About The Position

We are looking for an AI Inference Engineer with a solid background in speech recognition and model inference. In this role, you will develop state-of-the-art automatic speech recognition (ASR) systems and ship them to various Zoom products. You will work on cutting-edge speech modeling and inference technologies alongside world-class speech scientists. The role involves collaboration with cross-functional teams, including product, science, engineering, and infrastructure teams, to deliver high-impact projects from the ground up.

About the Team

Zoom's AI Speech Team develops speech recognition technologies to improve Zoom's conversational AI experience. This work impacts products such as Zoom AI Companion, Zoom Meetings and Workplace, Zoom Contact Center, Zoom Phone, and Zoom Revenue Accelerator. Our team's mission is to equip the powerful AI brain with human-level listening and understanding for voice input. As an AI Inference Engineer, you will develop novel speech model inference solutions on modern AI inference hardware such as GPUs, TPUs, and AI-specific chips. Our goal is to deliver a unique AI-powered collaboration platform to users across the globe.

Requirements

  • Possess a Master's degree in Computer Science, Electrical Engineering, or a related field, with 3+ years of experience in speech recognition, speech-LLMs, or AI model inference.
  • Display knowledge of deep learning and hands-on programming skills in Python, shell scripting, and C/C++, along with familiarity with ML frameworks such as PyTorch and TensorFlow.
  • Demonstrate deep understanding of transformer encoder-decoder frameworks for speech recognition, including attention mechanisms, beam search and sequence-to-sequence modeling for end-to-end ASR systems.
  • Understand recent advancements in speech foundation models and speech-LLMs that integrate acoustic and linguistic representations, enabling unified modeling for speech understanding and transcription tasks.
  • Have experience in optimizing deep learning model inference on NVIDIA GPUs, including profiling and accelerating AI models using CUDA, TensorRT, and mixed-precision computation to achieve low latency, high-throughput performance.
  • Have experience developing and tuning custom CUDA kernels, leveraging CUDA Graphs for efficient execution scheduling, and minimizing kernel launch overhead to maximize GPU utilization.
  • Be proficient in end-to-end performance analysis, memory optimization, and deployment of large-scale ML models on GPU clusters.
  • Have experience with stream management, asynchronous execution, and integrating frameworks such as PyTorch and TensorFlow for real-time inference.
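
To illustrate the beam-search decoding mentioned in the requirements above, here is a minimal, framework-free sketch. This is an illustrative toy, not Zoom code: `score_next` stands in for a real decoder's next-token log-probabilities, and the token IDs, beam width, and EOS id are invented for the example.

```python
import math

EOS = 0  # hypothetical end-of-sequence token id

def beam_search(score_next, beam_width=3, max_len=10):
    """Generic beam search over token sequences.

    score_next(prefix) -> dict mapping token id -> log-probability.
    Returns the highest-scoring finished hypothesis (tokens, log_prob).
    """
    # Each hypothesis is (token_list, cumulative_log_prob).
    beams = [([], 0.0)]
    finished = []
    for _ in range(max_len):
        # Expand every surviving hypothesis by every candidate token.
        candidates = []
        for tokens, logp in beams:
            for tok, tok_logp in score_next(tokens).items():
                candidates.append((tokens + [tok], logp + tok_logp))
        # Keep the top-k hypotheses; set EOS-terminated ones aside as finished.
        candidates.sort(key=lambda h: h[1], reverse=True)
        beams = []
        for tokens, logp in candidates[:beam_width]:
            if tokens[-1] == EOS:
                finished.append((tokens, logp))
            else:
                beams.append((tokens, logp))
        if not beams:
            break
    finished.extend(beams)  # fall back to unfinished beams at max_len
    return max(finished, key=lambda h: h[1])

# Toy stand-in "decoder": prefers token 1 for two steps, then EOS.
def toy_scores(prefix):
    if len(prefix) < 2:
        return {1: math.log(0.7), 2: math.log(0.2), EOS: math.log(0.1)}
    return {EOS: math.log(0.9), 1: math.log(0.1)}

tokens, logp = beam_search(toy_scores, beam_width=2)
```

In a production ASR system this loop would additionally batch hypotheses, score them on the GPU, and apply length normalization, but the top-k pruning structure is the same.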

Responsibilities

  • Developing state-of-the-art speech services for Zoom products.
  • Devising novel techniques where off-the-shelf solutions are not available.
  • Optimizing ASR inference systems for production deployment, including inference latency, throughput, memory footprint, and resource utilization.
  • Optimizing model inference performance by diving deep into the lower stack of inference frameworks, with a focus on hardware-specific optimizations for NVIDIA GPUs.
  • Proposing new model structures by joint optimization of model accuracy and inference speed.
  • Designing and developing ASR systems with low latency and high accuracy requirements, while ensuring scalability of GPU infrastructure and improving throughput of ASR service.
  • Profiling and debugging ASR runtime performance bottlenecks across different deployment hardware and environments.