Staff Software Engineer, TPU Performance

Google · Mountain View, CA
$197,000 - $291,000

About The Position

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic about taking on new problems across the full stack as we continue to push technology forward.

Google's Core Machine Learning (ML) organization is seeking software engineers to join the team known for pioneering work with Tensor Processing Units (TPUs). In this role, you will work on Gemini, as well as industry-leading open-source models, to understand model architecture and optimize the performance of these ML models on TPU systems for both the JAX and PyTorch platforms. You will improve the performance of ever-evolving ML workloads. These fundamental efforts will influence next-generation TPU architectures via partnerships, ensuring performance for Gemini and Open-Source Software (OSS) ML models.

The Core team builds the technical foundation behind Google's flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users, and they drive the pace of innovation for every developer. We look across Google's products to build central solutions, break down technical barriers, and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

Requirements

  • Bachelor’s degree or equivalent practical experience.
  • 8 years of experience in software development.
  • 5 years of experience testing and launching software products.
  • 3 years of experience with software design and architecture.
  • 5 years of experience with one or more of the following: Speech/audio (e.g., technology duplicating and responding to the human voice), reinforcement learning (e.g., sequential decision making), ML infrastructure, or specialization in another ML field.
  • 5 years of experience with ML design and ML infrastructure (e.g., model deployment, model evaluation, data processing, debugging, fine tuning).

Nice To Haves

  • Master’s degree or PhD in Engineering, Computer Science, or a related technical field.
  • 8 years of experience with data structures and algorithms.
  • Experience with machine learning, compiler optimization, code generation, and runtime systems for GPU architectures (e.g., OpenXLA, MLIR, Triton).
  • Experience in tailoring algorithms and ML models to exploit ML accelerator architecture strengths and minimize weaknesses.
  • Experience in low-level GPU programming (CUDA, OpenCL, etc.) and performance tuning techniques.
  • Understanding of modern Graphics Processing Unit (GPU), TPU, or other ML accelerator architectures, memory hierarchies, and performance bottlenecks.

Responsibilities

  • Identify and maintain ML training and serving benchmarks that are representative of Google production workloads and the broader ML industry.
  • Achieve performance targets for customer launches and, in the case of Third-Party/Open-Source Software (3P/OSS) models, for benchmark submissions (e.g., MLCommons, InferenceMAX).
  • Use the benchmarks to identify performance opportunities and drive out-of-the-box performance improvements in the compiler, runtime, and related components, in collaboration with those teams.
  • Engage with Google product teams and researchers to solve their performance problems (e.g., onboarding new ML models and products onto new Google TPU hardware, and enabling giant models to train efficiently at very large scale, i.e., thousands of TPUs).
  • Analyze performance and efficiency metrics to identify bottlenecks, then design and implement solutions at Google fleet-wide scale.