Lead Software Engineer, Model Serving Platform

Sciforium · San Francisco, CA
Onsite

About The Position

This is a rare chance to help architect and lead the development of Sciforium’s next-generation model serving platform: the high-performance engine that will bring a multimodal, highly efficient foundation model to market. As a senior technical leader, you’ll not only build core components yourself but also guide and mentor other engineers, influencing engineering direction, standards, and execution quality. You will learn and shape the full AI stack, from GPU kernels and quantized execution paths to distributed serving, scheduling, and the APIs that power real-time AI applications. If you enjoy deep systems work, thrive on ownership, and want to lead engineers in building foundational AI infrastructure, this role puts you at the center of Sciforium’s mission and growth.

Requirements

  • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience.
  • 5+ years of experience designing and building scalable, reliable backend systems or distributed infrastructure.
  • Strong understanding of LLM inference mechanics (prefill vs. decode, batching, KV cache; see the sketch after this list).
  • Experience with Kubernetes/Ray and containerization.
  • Strong proficiency in C++ and Python.
  • Strong debugging, profiling, and performance optimization skills at the system level.
  • Ability to collaborate closely with ML researchers and translate model or runtime requirements into production-grade systems.
  • Effective communication skills and the ability to lead technical discussions, mentor engineers, and drive engineering quality.
  • Comfortable working from the office and contributing to a fast-moving, high-ownership team culture.
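
As a rough illustration of the inference mechanics named above, here is a deliberately toy Python sketch. Every name and the arithmetic inside are placeholders rather than any production engine’s API; it shows only the control flow: prefill processes the whole prompt in one compute-bound pass that populates the KV cache, and decode then extends the cache one token per memory-bandwidth-bound step instead of recomputing past positions.

    from dataclasses import dataclass, field

    @dataclass
    class KVCache:
        # One cached (key, value) pair per processed token. Real engines keep
        # these per layer and per attention head, often in paged GPU memory.
        keys: list[int] = field(default_factory=list)
        values: list[int] = field(default_factory=list)

    def project_kv(token: int) -> tuple[int, int]:
        # Stand-in for a transformer layer's per-token K/V projections.
        return token * 2, token * 3

    def next_token(cache: KVCache) -> int:
        # Stand-in for attention plus sampling over all cached positions.
        return sum(cache.keys) % 101

    def prefill(prompt: list[int], cache: KVCache) -> int:
        # Prefill: one pass over the whole prompt, filling the cache so
        # decode never has to revisit prompt tokens.
        for tok in prompt:
            k, v = project_kv(tok)
            cache.keys.append(k)
            cache.values.append(v)
        return next_token(cache)

    def decode(first: int, cache: KVCache, n: int) -> list[int]:
        # Decode: one new token per step, appending each fresh K/V pair
        # instead of recomputing attention inputs for past positions.
        out, tok = [first], first
        for _ in range(n - 1):
            k, v = project_kv(tok)
            cache.keys.append(k)
            cache.values.append(v)
            tok = next_token(cache)
            out.append(tok)
        return out

    cache = KVCache()
    print(decode(prefill([5, 7, 11], cache), cache, n=4))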

Nice To Haves

  • Experience with ML systems engineering, distributed GPU scheduling, or open-source inference engines such as vLLM, SGLang, or TensorRT-LLM.
  • Experience building large-scale ML/MLOps infrastructure.
  • Proficiency in CUDA or ROCm and experience with GPU profiling tools.
  • Experience at an AI/ML startup, research lab, or Big Tech infrastructure/ML team.
  • Familiarity with multimodal model architectures, raw-byte models, or efficient inference techniques.
  • Contributions to open-source ML or HPC infrastructure.

Responsibilities

  • Lead the technical direction of the model serving platform, owning architecture decisions and guiding engineering execution.
  • Build core serving components including execution runtimes, batching, scheduling, and distributed inference systems (a toy batching sketch follows this list).
  • Develop high-performance C++ and CUDA/HIP modules, including custom GPU kernels and memory-optimized runtimes.
  • Collaborate with ML researchers to productionize new multimodal models and ensure low-latency, scalable inference.
  • Build Python APIs and services that expose model capabilities to downstream applications.
  • Mentor and support other engineers through code reviews, design discussions, and hands-on technical guidance.
  • Drive performance profiling, benchmarking, and observability across the inference stack.
  • Ensure high reliability and maintainability through testing, monitoring, and engineering best practices.
  • Troubleshoot and resolve complex issues across GPU, runtime, and service layers.
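
To make the batching and scheduling work concrete, below is a minimal Python sketch of continuous batching under simplified assumptions: each request is reduced to a counter of tokens left to generate, and the scheduler re-forms the batch on every decode step, admitting queued requests and retiring finished ones. The names (Request, serve, max_batch) are hypothetical illustrations, not a description of Sciforium’s actual design.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Request:
        rid: int
        remaining: int  # tokens this request still needs to generate

    def serve(queue: deque, max_batch: int) -> None:
        running: list[Request] = []
        step = 0
        while queue or running:
            # Admit waiting requests up to the batch limit. A real scheduler
            # would also check KV-cache memory headroom before admitting.
            while queue and len(running) < max_batch:
                running.append(queue.popleft())
            # One fused decode step advances every request in the batch.
            for req in running:
                req.remaining -= 1
            step += 1
            # Retire finished requests immediately, freeing their batch
            # slots for the queue instead of draining the whole batch.
            done = [r.rid for r in running if r.remaining == 0]
            running = [r for r in running if r.remaining > 0]
            if done:
                print(f"step {step}: finished {done}")

    serve(deque([Request(1, 2), Request(2, 5), Request(3, 3)]), max_batch=2)

The point of re-forming the batch every step is that a short request never waits behind the longest request it happened to be batched with, which is the main latency advantage of continuous batching over static batching.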

Benefits

  • Medical, dental, and vision insurance
  • 401(k) plan
  • Daily lunch, snacks, and beverages
  • Flexible time off
  • Competitive salary and equity