This role centers on the following responsibilities:
- Conduct in-depth research into the underlying hardware logic of AI accelerators, evaluating their power efficiency and suitability for Large Language Model (LLM) inference and training.
- Design and optimize high-performance operator libraries for large-scale cloud computing environments, addressing latency in hardware scheduling, memory management, and distributed communication.
- Define the interconnect architecture that enables virtualization, standardized access, and efficient pooling of heterogeneous computing resources in the cloud.
- Monitor global trends in semiconductors and accelerators, and perform feasibility studies and experimental validation for adopting emerging technologies within cloud infrastructure.
Job Type
Full-time
Career Level
Senior
Number of Employees
5,001-10,000 employees