You'll design and build the core infrastructure that powers AI inference across Cloudflare's global network — real-time voice, frontier open LLMs, and customer-deployed models running on a heterogeneous fleet of GPUs and next-generation accelerators in hundreds of cities worldwide. Working alongside AI/ML engineers, hardware partners, and Cloudflare product teams, you'll solve hard problems in distributed systems and high-performance computing: sub-second model cold starts, multi-accelerator workload scheduling, efficient KV cache management, and a model deployment platform serving both Cloudflare and customers bringing their own models. We're building an AI inference platform embedded in the fabric of the internet — something that doesn't exist yet — and this role puts you at the center of it. We're looking for high-agency systems engineers who are energized by foundational infrastructure problems and want to define how AI runs at the edge of the network.
Job Type: Full-time
Career Level: Mid Level
Education Level: No education listed
Number of Employees: 501-1,000