Engineering Manager, Inference ML Runtime

Cerebras Systems
Toronto, ON

About The Position

The Inference ML Engineering team at Cerebras builds the runtime, APIs, and systems that power the fastest generative AI inference platform in the world. As an Engineering Manager, Inference ML Runtime, you will lead a team responsible for designing and scaling the systems that enable seamless execution of state-of-the-art AI models on Cerebras hardware. You will operate at the intersection of machine learning, distributed systems, and high-performance runtime engineering, translating cutting-edge research into production-ready infrastructure to serve a variety of text-only and multimodal models. This role combines technical leadership, people management, and execution ownership, with direct impact on Cerebras’ core inference platform.

Requirements

  • 8+ years of experience in: large-scale software engineering; ML systems or distributed systems.
  • 2+ years of engineering management experience.
  • Strong programming skills in: Python (production systems); C++ (performance-critical systems).
  • Experience building and scaling large-scale inference systems (LLMs or multimodal).
  • Experience working with cloud infrastructure and following best practices for building scalable microservices and applications.

Nice To Haves

  • Experience with: LLM serving frameworks (e.g., vLLM, TensorRT-LLM, SGLang); PyTorch and deep learning frameworks; distributed systems and high-performance computing.
  • Familiarity with: ML runtime systems; model execution pipelines; performance optimization for AI workloads.

Responsibilities

  • Own the architecture and evolution of the ML inference runtime and serving systems.
  • Guide the design of: high-throughput, low-latency inference pipelines; multimodal model execution (text, image, audio, video); scalable serving infrastructure for concurrent workloads.
  • Partner with cloud, compiler, core runtime, hardware, and ML teams to optimize end-to-end performance.
  • Build, manage, and grow a team of ML systems and infrastructure engineers.
  • Provide technical direction, mentorship, and career development.
  • Foster a culture of ownership, velocity, and engineering excellence.
  • Recruit top talent in ML systems, distributed systems, and runtime engineering.
  • Drive execution of complex, cross-functional initiatives across: ML engineering; compiler/runtime teams; cloud and infrastructure teams.
  • Own delivery of features such as: advanced inference capabilities (structured outputs, sampling strategies); heterogeneous model types, including text-only and multimodal; performance optimization (latency, throughput, memory efficiency); observability and reliability across the inference stack.
  • Ensure high-quality releases through strong testing, validation, and operational rigor.
  • Scale Cerebras’ inference platform to handle large volumes of concurrent requests at industry-leading speed.
  • Drive improvements in: latency; throughput; compute efficiency.
  • Identify and prioritize technical debt and system bottlenecks.
  • Maintain Cerebras’ industry-leading inference speed advantage.
  • Partner with: ML researchers (model enablement); compiler teams (model execution optimization); cloud/platform teams (deployment and scaling).
  • Act as a bridge between research, infrastructure, and production systems.