CoreWeave's AI/ML Platform Services organization builds the distributed systems that power large-scale inference workloads across our GPU-accelerated cloud. As a Technical Program Manager focused on distributed inference, model onboarding, and runtime optimization, you will drive programs that enable AI models to serve billions of inferences reliably and efficiently.

You'll partner closely with engineering, product, and marketing teams to deliver scalable, performant model-serving systems: reducing latency, improving GPU utilization, and making it easier for customers and internal teams to onboard and optimize AI models on CoreWeave.

This role is ideal for someone who thrives in complex, cross-functional environments, has deep technical fluency in distributed systems, ML infrastructure, or generative AI, and excels at creating structure, visibility, and execution rhythm across multiple teams.