Sr. AI Inference Systems Engineer

Tencent
Palo Alto, CA
Onsite

About The Position

This role leads optimization of the full inference pipeline for large models (LLM and multimodal), focusing on KV Cache storage strategies, Router architecture design, and collaborative operator optimization to maximize throughput and minimize latency. It also involves in-depth research into heterogeneous computing: evaluating hardware accelerator suitability for various inference scenarios and developing standardized optimization schemes. The engineer will design and implement high-performance inference frameworks, optimizing scheduling and memory management to address distributed inference challenges. A key aspect is tracking global advances in inference technology and driving their productization, while providing technical leadership: overcoming bottlenecks, designing technical roadmaps, and mentoring team members.

Requirements

  • Master’s or Ph.D. in Computer Science, Electronic Engineering, AI, or related fields.
  • Significant professional experience in AI inference optimization or heterogeneous computing.
  • Proficient in at least one AI accelerator architecture; deep understanding of underlying principles, instruction sets, and hardware-specific tuning.
  • Mastery of core inference optimization techniques, including multi-level KV Cache management, Quantization, and Intelligent Routing.
  • Expert in parallel computing and distributed systems; deep understanding of low-level programming models (e.g., CUDA, Triton) and inference engine architectures.
  • Familiar with mainstream deep learning frameworks (e.g., PyTorch, TensorFlow).
  • Up to date with global developments in inference technology and computing architectures, with the ability to objectively evaluate different technical paths.
  • Strong analytical and cross-team collaboration skills, with a proven track record of leading complex inference projects to fruition.

Nice To Haves

  • Experience in optimizing ultra-large-scale models is highly preferred.
  • Experience in tuning ultra-large-scale inference clusters or driving AI inference productization.
  • High-level publications or core patents in relevant fields are a plus.

Responsibilities

  • End-to-End Inference Optimization: Lead the optimization of the full inference pipeline for Large Models (LLM, Multimodal); focus on KV Cache storage strategies, Router architecture design, and collaborative operator optimization to maximize throughput and minimize latency.
  • Heterogeneous Computing Research: Conduct in-depth research into the underlying inference logic of various hardware accelerators; evaluate architectural suitability for real-time, batch, and streaming inference scenarios to develop standardized optimization schemes.
  • Inference Framework & Toolchain: Design and implement high-performance inference frameworks; optimize scheduling and memory management to resolve long-tail issues such as communication latency and load imbalance in distributed inference.
  • Technological Innovation: Track global advancements in inference technology (e.g., compiler optimization, model compression, and hardware fusion); drive the productization of emerging technologies within production environments.
  • Technical Leadership: Lead efforts to overcome key technical bottlenecks in inference optimization; design technical roadmaps and mentor team members to build a robust AI inference technical ecosystem.

Benefits

  • Sign-on payment
  • Relocation package
  • Restricted stock units
  • Medical benefits
  • Dental benefits
  • Vision benefits
  • Life benefits
  • Disability benefits
  • Participation in the Company’s 401(k) plan
  • 15 to 25 days of vacation per year (depending on the employee’s tenure)
  • Up to 13 days of holidays throughout the calendar year
  • Up to 10 days of paid sick leave per year


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees
