About The Position

Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. In this role, you will be a key engineer contributing to feature development and performance optimization for LLM inference on TPUs.

The Google Cloud AI Research team addresses AI challenges motivated by Google Cloud’s mission of bringing AI to tech, healthcare, finance, retail, and many other industries. We work on a range of unique problems focused on research topics that maximize scientific and real-world impact, aiming to push the state of the art in AI and share findings with the broader research community. We also collaborate with product teams to turn innovations into real-world impact that benefits our customers.

Requirements

  • Bachelor’s degree or equivalent practical experience.
  • 5 years of experience programming in Python or C++.
  • 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
  • 3 years of experience with large language model (LLM) concepts and algorithms, and experience designing NLP solutions.
  • 3 years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, and debugging).

Nice To Haves

  • Experience with modern deep learning toolkits (e.g., JAX).
  • Experience developing and optimizing LLMs.
  • Experience working on GPUs or TPUs.
  • Experience with latency, memory, compute, and quality tradeoffs as they apply to ML model architectures.
  • Experience with low-level ML model optimization, new architectures, and tools.

Responsibilities

  • Optimize large-scale models for single- and multi-host inference.
  • Explore optimizations such as quantization and sharding.
  • Add features to process long contexts in large language models (LLMs).
  • Collaborate with the machine learning (ML) research, ML performance, model optimization tooling, and other optimization teams.
  • Maintain the open-sourced vLLM-based inference stack for Tensor Processing Units (TPUs).

Benefits

  • Bonus
  • Equity
  • Benefits