About The Position

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design, and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs, with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to take on new problems across the full stack as we continue to push technology forward. With your technical expertise, you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions.

Google's Core ML organization is seeking software engineers to join the team known for pioneering work with TPUs. The TPU Performance team is responsible for bleeding-edge performance, extracting maximum efficiency for AI/ML training and serving workloads. Our team drives optimizations for Cloud TPU and on-prem TPU customers, including the major frontier labs, hyperscalers, and foundation model builders. Our capabilities cover headroom/roofline analysis, sharding, compiler optimizations, quantization, sparsity, custom kernels, agentic optimizations, and much more.

The AI and Infrastructure team is redefining what's possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and infrastructure at unparalleled scale, efficiency, reliability, and velocity.
Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide. We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.

Requirements

  • Bachelor's degree or equivalent practical experience
  • 8 years of experience programming in C++ or Python
  • 5 years of experience testing and launching software products
  • 5 years of experience with performance, large-scale systems data analysis, visualization tools, or debugging
  • 3 years of experience with software design and architecture

Nice To Haves

  • Master’s degree or PhD in Engineering, Computer Science, or a related technical field
  • 8 years of experience with data structures and algorithms
  • 3 years of experience in a technical leadership role leading project teams and setting technical direction
  • Experience with compiler optimization, code generation, and runtime systems for popular accelerators
  • Understanding of modern GPU, TPU, or other ML accelerator architectures, memory hierarchies, and performance bottlenecks
  • Expertise in tailoring algorithms and ML models to exploit ML accelerator architecture strengths and minimize weaknesses

Responsibilities

  • Identify and maintain ML training and serving benchmarks.
  • Achieve state-of-the-art performance for customer launches and, in the case of 3P/OSS models, for competitive benchmark submissions (MLCommons, InferenceX, etc.).
  • Use the benchmarks to identify performance opportunities and directly drive both near-term state-of-the-art gains (e.g., custom kernels) and out-of-the-box performance (e.g., compiler/runtime optimizations, agentic tooling, auto-sharding) in collaboration with partner teams.
  • Participate in algorithmic innovation, exploiting new TPU hardware features and model-preserving optimizations (e.g. speculative decoding, sparsity, quantization, LoRA, etc.).
  • Participate in co-designing TPU-friendly models that showcase the quality of OSS models, which are typically designed on GPUs, at competitive performance.