As an AI/ML Engineer on the Metrics Frameworks team, part of the Simulation, Evaluation, and Data organization, you will be an individual contributor focused on building and optimizing the specialized analytics frameworks and tools that accelerate autonomous vehicle development, testing, and deployment. We are seeking experienced software engineers to build analytical frameworks and tools that empower internal users to construct quantitative analysis pipelines and develop metrics. These metrics support and accelerate feature design, prioritization, and development, and evaluate the impact of recently released features. Our analytics framework serves road event monitoring, data mining and training, and simulation metrics alike.

The Simulation, Evaluation, and Data organization is dedicated to advancing autonomous vehicle development through cutting-edge simulation technologies. Within it, the Metrics Frameworks team creates, maintains, and evolves the analytics framework that supports GM's goal of safe, high-performing, and scalable driverless technology. The team delivers robust, scalable tools that enable data-driven decision-making throughout the AV feature development lifecycle.

We collaborate closely with the Simulation Evaluation, Embodied AI, and System and Test Engineering teams, developing automation tools and shared libraries that enhance productivity across all engineering teams. We are accountable for the performance, reliability, and scalability OKRs of the analytics framework. This includes, but is not limited to, developing customized analytics workflows, improving operational telemetry and dashboards to track the KPIs that drive prioritization, and identifying, designing, and implementing solutions to achieve key results.