Tencent is a world-leading internet and technology company that develops innovative products and services to improve the quality of life for people around the world. As an equal opportunity employer, Tencent firmly believes that diverse voices fuel its innovation and allow it to better serve its users and the community, fostering an environment where every employee feels supported and inspired to achieve individual and common goals.

The Sr. AI Inference Systems Engineer will:

- Lead optimization of the full inference pipeline for large models (LLM, multimodal), focusing on KV Cache storage strategies, Router architecture design, and collaborative operator optimization to maximize throughput and minimize latency.
- Conduct in-depth research into the underlying inference logic of various hardware accelerators, evaluating architectural suitability for real-time, batch, and streaming inference scenarios to develop standardized optimization schemes.
- Design and implement high-performance inference frameworks, optimizing scheduling and memory management to resolve long-tail issues such as communication latency and load imbalance in distributed inference.
- Track global advancements in inference technology (e.g., compiler optimization, model compression, and hardware fusion) and drive the productization of emerging technologies in production environments.
- Provide technical leadership: overcome key technical bottlenecks in inference optimization, design technical roadmaps, and mentor team members to build a robust AI inference technical ecosystem.
Job Type
Full-time
Career Level
Senior