The AV ML Infra team at GM builds ML infrastructure designed to meet the unique demands of AI and ML innovation, supporting a wide range of use cases across teams such as Embodied AI, Simulation, Data Science, and more. We enable scalable and efficient ML experimentation, enhance the productivity of ML engineers, and drive the adoption of cutting-edge ML techniques.

Our AV ML infrastructure includes:

- AI Validation & Inference: Ensures robust model performance by running large-scale simulation workloads and managing reliable ML inference pipelines.
- ML Compute: Streamlines and optimizes large-scale ML training and inference across cloud and on-prem compute resources.
- AV Pipelines & Lineage: Automates ML workflows via an orchestration platform while tracking data and model lineage across diverse infrastructures, accelerating engineering velocity and ensuring reproducibility.

Together, these tools and systems empower GM to tackle the complexities of autonomous driving technology and expedite our path to commercialization.

As an intern, you will help develop and optimize our AV ML infrastructure by improving data processing pipelines and scheduling, accelerating model training and inference workflows, and/or enhancing testing infrastructure such as Carbench/HIL. You will support tool development, instrumentation, and system monitoring to boost reliability, reduce latency, and increase iteration speed for autonomous driving performance.