Senior MLOps Engineer - Production

Luma AI, Inc. · Palo Alto, CA

About The Position

Luma's mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

This is a rare opportunity to build the foundational infrastructure that powers our large-scale multimodal models. We believe that reliable, high-performance infrastructure is the single biggest differentiating factor between success and failure in achieving our mission. You will be a foundational member of the team, designing the critical systems that allow us to train and serve next-generation AI to millions of users.

Requirements

  • 5+ years of professional engineering experience with deep, hands-on proficiency in Python and complex distributed systems architecture.
  • Extensive, practical experience building and managing systems at scale, specifically with queues, scheduling, traffic control, and fleet management.
  • Deep expertise in our core infrastructure stack: Linux, Docker, and Kubernetes.
  • Strong experience with Redis, S3-compatible storage, and public cloud platforms (AWS).

Nice To Haves

  • Experience with high-performance, large-scale ML systems (managing >100 GPUs).
  • Deep familiarity with PyTorch and CUDA.
  • Experience with modern networking stacks, including RDMA (RoCE, InfiniBand, NVLink).
  • Familiarity with FFmpeg and multimedia processing pipelines.

Responsibilities

  • Architect end-to-end model serving pipelines and integrate new model architectures from our research team into our core, high-throughput inference engine.
  • Build robust and sophisticated scheduling systems to manage jobs based on cluster availability and user priority, ensuring optimal utilization of thousands of expensive GPUs.
  • Design and implement dynamic, traffic-based systems for hot-swapping models on our GPU workers to maximize fleet efficiency and meet product SLOs.
  • Own the end-to-end CI/CD pipelines, including creating a resilient artifact store to manage all model checkpoints across multiple versions and providers.
  • Develop and maintain user-friendly APIs and interaction patterns that empower our product and research teams to ship groundbreaking features at high velocity.
  • Manage and optimize our complex inference workloads at scale, operating across multiple clusters and hardware providers.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Publishing Industries
  • Education Level: No Education Listed
  • Number of Employees: 51-100
