CoreWeave's AI/ML Platform Services organization is responsible for the orchestration layer that schedules and manages AI workloads across CoreWeave's GPU-accelerated infrastructure. As a Technical Program Manager focused on orchestration and model benchmarking, you will drive programs that define how large-scale AI workloads are scheduled, executed, and evaluated for performance and cost efficiency.

You'll partner with engineering, infrastructure, and product teams to evolve CoreWeave's orchestration systems, including Slurm-on-Kubernetes (SUNK) and future orchestrators, while building robust benchmarking and observability frameworks that help customers and internal teams compare model performance, runtime efficiency, and GPU utilization across environments.

This role is ideal for someone who thrives at the intersection of distributed systems and AI infrastructure, has deep technical fluency in workload orchestration or scheduling, and excels at building the operational structure and visibility required to scale complex, high-throughput systems.