Sr. Cloud AI Infrastructure Engineer

Tencent
Palo Alto, CA
Onsite

About The Position

This role centers on the hardware foundations of AI infrastructure in the cloud. The engineer will research the underlying hardware logic of various AI accelerators, evaluating their power efficiency and suitability for Large Language Model (LLM) inference and training, and will design and optimize high-performance operator libraries for large-scale cloud computing environments, addressing latency in hardware scheduling, memory management, and distributed communication. A key responsibility is defining the interconnect architecture that enables virtualization, standardized access, and efficient pooling of heterogeneous computing resources in the cloud. The role also requires monitoring global trends in semiconductors and accelerators, and performing feasibility studies and experimental validation for bringing emerging technologies into cloud infrastructure.

Requirements

  • Education: Master’s or Ph.D. degree in Computer Engineering, Electronic Engineering, Microelectronics, or a related field.
  • Core Expertise: Deep expertise in GPGPU architectures or other mainstream AI accelerator architectures.
  • Programming & Frameworks: Proficiency with parallel computing frameworks and a deep understanding of low-level operator development languages (e.g., CUDA, Triton).
  • Network & Distributed Systems: Solid understanding of large-scale distributed systems, cluster topologies (e.g., Fat-tree, Torus), and high-performance network protocols.
  • Industry Insight: Familiarity with the architectural evolution of leading global computing companies, and the ability to objectively analyze the technical pros and cons and engineering challenges of different architectural paths.

Nice To Haves

  • Experience: Hands-on application, optimization, or architectural design of ultra-large-scale accelerator clusters is preferred.
  • Framework Optimization: Experience with the low-level adaptation and performance tuning of mainstream deep learning frameworks (e.g., PyTorch, TensorFlow) is preferred.

Responsibilities

  • Architecture Research: Conduct in-depth research into the underlying hardware logic of various AI accelerators; evaluate the power-efficiency ratio and suitability of different heterogeneous architectures in the context of Large Language Model (LLM) inference and training.
  • Operator & Performance Optimization: Design and optimize high-performance operator libraries for large-scale cloud computing environments; resolve long-tail latency issues in hardware scheduling, memory management, and distributed communication.
  • Interconnect Architecture Definition: Define the interconnect architecture; drive the virtualization, standardized access, and efficient pooling of heterogeneous computing resources in the cloud.
  • Technology Trend Analysis: Monitor global trends in semiconductors and accelerators; perform feasibility studies and experimental validation for the implementation of emerging technologies within cloud infrastructure.

Benefits

  • Sign-on payment
  • Relocation package
  • Restricted stock units
  • Medical benefits
  • Dental benefits
  • Vision benefits
  • Life and disability benefits
  • Participation in the Company’s 401(k) plan
  • 15 to 25 days of vacation per year (depending on the employee’s tenure)
  • Up to 13 days of holidays throughout the calendar year
  • Up to 10 days of paid sick leave per year