Senior Systems Engineer, Workers AI

Cloudflare · San Francisco, CA · Hybrid

About The Position

You'll design and build the core infrastructure that powers AI inference across Cloudflare's global network: real-time voice, frontier open LLMs, and customer-deployed models running on a heterogeneous fleet of GPUs and next-generation accelerators in hundreds of cities worldwide. Working alongside AI/ML engineers, hardware partners, and Cloudflare product teams, you'll solve hard problems in distributed systems and high-performance computing: sub-second model cold starts, multi-accelerator workload scheduling, efficient KV cache management, and a deployment platform that serves both Cloudflare's own models and models customers bring themselves. We're building an AI inference platform embedded in the fabric of the internet, something that doesn't exist yet, and this role puts you at the center of it. We're looking for high-agency systems engineers who are energized by foundational infrastructure problems and want to define how AI runs at the edge of the network.
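
To make the scheduling problem concrete, here is a minimal sketch of least-loaded placement in Rust. It is illustrative only, not Cloudflare's implementation: the `Accelerator` type, its fields, and `pick_least_loaded` are hypothetical, and a real scheduler would also weigh model residency (to avoid cold starts), KV cache locality, and network distance.

```rust
/// Hypothetical view of one accelerator's load; field names are illustrative.
struct Accelerator {
    id: String,
    inflight_requests: u32,
    capacity: u32,
}

/// Route to the accelerator with the most free capacity.
fn pick_least_loaded(fleet: &[Accelerator]) -> Option<&Accelerator> {
    fleet
        .iter()
        .filter(|a| a.inflight_requests < a.capacity) // skip saturated devices
        .max_by_key(|a| a.capacity - a.inflight_requests)
}

fn main() {
    let fleet = vec![
        Accelerator { id: "gpu-ams-0".into(), inflight_requests: 7, capacity: 8 },
        Accelerator { id: "gpu-ams-1".into(), inflight_requests: 2, capacity: 8 },
    ];
    if let Some(a) = pick_least_loaded(&fleet) {
        println!("routing request to {}", a.id); // prints gpu-ams-1
    }
}
```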

Requirements

  • Experience in systems engineering, with a focus on distributed, high-performance systems.
  • Expert proficiency in Rust programming, particularly in an asynchronous environment (see the sketch after this list).
  • Deep understanding and hands-on experience with relevant networking and application protocols (e.g., TCP, HTTP, WebSocket).
  • Experience with scaling and performance optimization techniques, including load balancing and caching in a distributed environment.
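
As a purely illustrative sketch of the async Rust these requirements describe, the snippet below accepts TCP connections concurrently on the tokio runtime (an assumption; the posting names no specific runtime). The echo logic stands in for real HTTP or WebSocket handling, and the 1,024-connection limit is an arbitrary placeholder.

```rust
use std::sync::Arc;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Bound concurrent connections so a traffic spike degrades
    // gracefully instead of exhausting file descriptors.
    let permits = Arc::new(Semaphore::new(1024));
    let listener = TcpListener::bind("127.0.0.1:8080").await?;

    loop {
        let (mut socket, peer) = listener.accept().await?;
        let permits = permits.clone();
        // One lightweight task per connection; tasks yield at every
        // .await, so thousands can share a small thread pool.
        tokio::spawn(async move {
            let _permit = permits.acquire().await.expect("semaphore closed");
            let mut buf = [0u8; 4096];
            match socket.read(&mut buf).await {
                Ok(0) => {} // peer closed the connection
                Ok(n) => {
                    // Echo the bytes back; a real service would parse an
                    // HTTP request or WebSocket frame here instead.
                    let _ = socket.write_all(&buf[..n]).await;
                }
                Err(e) => eprintln!("read error from {peer}: {e}"),
            }
        });
    }
}
```

The semaphore is one common backpressure choice: waiting for a permit queues excess connections rather than rejecting them outright, a trade-off that varies by service.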

Nice To Haves

  • Demonstrable experience with container orchestration platforms, specifically Kubernetes and/or Nomad.
  • Familiarity with the challenges and architectures involved in large-scale inference serving (e.g., LLM and diffusion models).

Responsibilities

  • Develop and maintain core components of the serverless inference platform to ensure high availability and scalability for Cloudflare users.
  • Optimize the model scheduling system to significantly increase efficiency and resource utilization across our inference infrastructure.
  • Implement improvements to the inference request routing logic to enhance overall performance and reduce latency for end-users.
  • Drive significant, measurable improvements in the platform's reliability and resilience by identifying and mitigating systemic risks.
  • Expand and refine the observability stack, including metrics, logging, and tracing, and fine-tune alerts to proactively identify and resolve production issues (see the sketch after this list).
  • Lead complex, cross-functional technical projects from initial concept and design through final deployment and operationalization.
  • Act as a mentor to junior engineers and actively contribute to cultivating a strong, collaborative engineering culture within the team.
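
The observability item above maps naturally onto structured, span-based instrumentation. Here is a minimal sketch assuming the tracing and tracing-subscriber crates; `handle_inference`, its parameters, and the 500 ms threshold are hypothetical stand-ins.

```rust
use std::time::Instant;
use tracing::{info, instrument, warn};

// #[instrument] opens a span carrying request_id and model, so every
// log line emitted inside is automatically tagged with both.
#[instrument(skip(payload))]
fn handle_inference(request_id: u64, model: &str, payload: &[u8]) {
    let start = Instant::now();
    // ... run the model here ...
    let latency_ms = start.elapsed().as_millis() as u64;
    if latency_ms > 500 {
        warn!(latency_ms, "slow inference request"); // candidate for an alert
    } else {
        info!(latency_ms, payload_bytes = payload.len(), "served request");
    }
}

fn main() {
    // Emit structured logs to stdout; production would export spans
    // and metrics to a collector instead.
    tracing_subscriber::fmt::init();
    handle_inference(42, "example-model", b"example prompt");
}
```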


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Not listed
  • Number of Employees: 501-1,000
