About The Position

Accelerated Linear Algebra (XLA) powers all ML workloads at Google. It is also the compiler of choice for many external foundation model producers who value performance and reliability at scale, and it is among the most advanced ML compilers in the industry. In this role, you will specialize in the compiler's scaling capabilities, which are essential for supporting ever-increasing model sizes. Your contributions as part of the team will be critical to achieving the best performance and reliability for the most important and extremely large ML programs at Google and at top external AI companies. You will work with world experts in ML hardware, compilers, and performance optimization. Our team operates across the layers of the compiler, so you will have the opportunity to contribute across the stack, from high-level rewrites to low-level emitters that exercise specialized hardware features.

Requirements

  • Bachelor’s degree or equivalent practical experience.
  • 2 years of experience coding in C++, or 1 year of experience with an advanced degree.
  • 1 year of experience with low-level programming.
  • 1 year of experience working with hardware.

Nice To Haves

  • Master's degree or PhD in Computer Science, or a related technical field.
  • 2 years of experience with low-level ML accelerator programming, compiler development, or other close-to-hardware performance programming.
  • Experience profiling workloads, identifying bottlenecks, and introducing performance optimizations.
  • Experience with high-performance C++.
  • Experience with Multi-Level Intermediate Representation (MLIR) or LLVM.

Responsibilities

  • Deliver compiler parallelization features and optimization techniques for the TPU back-end that are necessary for large-scale workloads.
  • Contribute to the lowering and implementation of collective operations on the TPU platform.
  • Develop compiler optimization techniques at the lower levels and throughout the compiler stack.
  • Analyze upcoming and existing features in TPU architectures and leverage them for optimal horizontal scaling performance.
  • Collaborate with ML Performance and research teams to achieve roofline performance for the most critical workloads.
  • Build compiler-related tools for debugging and preventing scaling issues and for improving the engineering experience.

Benefits

  • Bonus
  • Equity
  • Benefits