Sr. AI Inference Systems Engineer

Tencent
Palo Alto, CA
Onsite

About The Position

Tencent is a world-leading internet and technology company that develops innovative products and services to improve the quality of life for people around the world. As an equal opportunity employer, Tencent firmly believes that diverse voices fuel innovation and help it better serve users and the community, fostering an environment where every employee feels supported and inspired to achieve individual and common goals. The Sr. AI Inference Systems Engineer leads optimization of the full inference pipeline for large models (LLM and multimodal), covering KV Cache storage strategies, Router architecture design, operator-level tuning, high-performance inference frameworks, and the productization of emerging inference technologies, and provides technical leadership and mentoring across the team. The Responsibilities section below details each of these areas.

Requirements

  • Master’s or Ph.D. in Computer Science, Electronic Engineering, AI, or related fields.
  • Significant professional experience in AI inference optimization or heterogeneous computing.
  • Proficient in at least one AI accelerator architecture; deep understanding of underlying principles, instruction sets, and hardware-specific tuning.
  • Mastery of core inference optimization techniques, including multi-level KV Cache management, Quantization, and Intelligent Routing (an illustrative KV cache sketch follows this list).
  • Expert in parallel computing and distributed systems; deep understanding of low-level programming models (e.g., CUDA, Triton) and inference engine architectures.
  • Familiar with mainstream deep learning frameworks (e.g., PyTorch, TensorFlow).
  • Current knowledge of global developments in inference technology and computing architectures, with the ability to objectively evaluate different technical paths.
  • Strong analytical and cross-team collaboration skills, with a proven track record of leading complex inference projects to fruition.
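
For context on the KV Cache management techniques named above, here is a minimal, purely illustrative Python sketch of block-based (paged) KV cache allocation, in the spirit of paged-attention serving engines. All names, the block size, and the allocator design are assumptions for illustration, not part of this posting or any Tencent system.

```python
from dataclasses import dataclass, field

BLOCK_SIZE = 16  # tokens per block (illustrative; real systems tune this per hardware)


@dataclass
class PagedKVCache:
    """Toy block-table allocator: each sequence owns a list of fixed-size
    blocks, so memory is committed one block at a time instead of being
    reserved up front for the maximum sequence length."""
    num_blocks: int
    free: list[int] = field(default_factory=list)
    tables: dict[int, list[int]] = field(default_factory=dict)  # seq -> block ids
    lengths: dict[int, int] = field(default_factory=dict)       # seq -> token count

    def __post_init__(self) -> None:
        self.free = list(range(self.num_blocks))

    def append(self, seq_id: int) -> int:
        """Record one new token for seq_id; return the block that stores its KV entry."""
        n = self.lengths.get(seq_id, 0)
        table = self.tables.setdefault(seq_id, [])
        if n % BLOCK_SIZE == 0:            # last block is full (or this is the first token)
            if not self.free:
                raise MemoryError("cache exhausted; evict or preempt a sequence")
            table.append(self.free.pop())  # lazily claim a fresh block
        self.lengths[seq_id] = n + 1
        return table[-1]

    def release(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

The design point this sketch captures is fragmentation control: committing memory in small blocks lets many variable-length sequences share one pool, which is why block tables underpin most multi-level KV cache strategies.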

Nice To Haves

  • Experience optimizing ultra-large-scale models (highly preferred).
  • Experience in tuning ultra-large-scale inference clusters or driving AI inference productization.
  • High-level publications or core patents in relevant fields are a plus.

Responsibilities

  • Lead the optimization of the full inference pipeline for Large Models (LLM, Multimodal); focus on KV Cache storage strategies, Router architecture design, and collaborative operator optimization to maximize throughput and minimize latency (a toy router sketch follows this list).
  • Conduct in-depth research into the underlying inference logic of various hardware accelerators; evaluate architectural suitability for real-time, batch, and streaming inference scenarios to develop standardized optimization schemes.
  • Design and implement high-performance inference frameworks; optimize scheduling and memory management to resolve long-tail issues such as communication latency and load imbalance in distributed inference.
  • Track global advancements in inference technology (e.g., compiler optimization, model compression, and hardware fusion); drive the productization of emerging technologies within production environments.
  • Lead efforts to overcome key technical bottlenecks in inference optimization; design technical roadmaps and mentor team members to build a robust AI inference technical ecosystem.
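
To make the Router and load-balancing responsibilities above concrete, the following is a toy least-load routing sketch in Python. It is a hedged illustration only: the class and replica names are invented, and it omits the prefix affinity, preemption, and latency signals a production router for distributed inference would also weigh.

```python
class LeastLoadRouter:
    """Toy router: dispatch each request to the replica with the fewest
    in-flight tokens, a first-order guard against load imbalance."""

    def __init__(self, replicas: list[str]) -> None:
        self.in_flight: dict[str, int] = {r: 0 for r in replicas}

    def route(self, est_tokens: int) -> str:
        """Pick the least-loaded replica and charge it the request's estimated cost."""
        replica = min(self.in_flight, key=self.in_flight.__getitem__)
        self.in_flight[replica] += est_tokens
        return replica

    def done(self, replica: str, est_tokens: int) -> None:
        """Release capacity when the request finishes."""
        self.in_flight[replica] -= est_tokens


# Example: three replicas, requests of varying size spread across them
router = LeastLoadRouter(["gpu-0", "gpu-1", "gpu-2"])
for tokens in (512, 128, 2048, 256):
    print(tokens, "->", router.route(tokens))
```

Balancing on estimated in-flight tokens rather than request count matters because decode cost scales with sequence length; long-tail latency in distributed inference often traces back to a few replicas absorbing the longest sequences.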

Benefits

  • Sign on payment (evaluated on a case-by-case basis)
  • Relocation package (evaluated on a case-by-case basis)
  • Restricted stock units (evaluated on a case-by-case basis)
  • Medical benefits
  • Dental benefits
  • Vision benefits
  • Life benefits
  • Disability benefits
  • Participation in the Company’s 401(k) plan
  • 15 to 25 days of vacation per year (depending on the employee’s tenure)
  • Up to 13 days of holidays throughout the calendar year
  • Up to 10 days of paid sick leave per year