The Core ML team contributes to the frameworks and compilers that support the Google Cloud Platform (GCP) Cloud Tensor Processing Unit (TPU) service and related Machine Learning (ML) models and frameworks. The team provides ML infrastructure customers with large-scale, cloud-based access to Google's first-party ML supercomputers (TPUs and TPU Pods) to run training and inference workloads using PyTorch and JAX. In this role, you will be responsible for the PyTorch ML framework, processes, ecosystem, and model performance, as well as for engagements with customers who use Google's TPUs to achieve massive scale and speed in their ML workloads.

The AI and Infrastructure team is redefining what's possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and infrastructure at unparalleled scale, efficiency, reliability, and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide. We're the driving force behind Google's groundbreaking innovations: empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware, our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.