About The Position

At Netflix, our mission is to entertain the world. Together, we are writing the next episode - pushing the boundaries of storytelling and global fandom, and making the unimaginable a reality. We are a dream team obsessed with the uncomfortable excitement of discovering what happens when you merge creativity, intuition, and cutting-edge technology. Come be a part of what’s next.

Machine Learning (ML) is core to that experience. From personalizing the home page to optimizing studio operations and powering new types of content, ML helps us entertain the world faster and better. The Machine Learning Platform (MLP) organization builds the scalable, reliable infrastructure that accelerates every ML practitioner at Netflix. Within MLP, the Offline Inference team owns the batch-prediction layer—enabling practitioners to generate, store, and serve predictions from a variety of models, including LLMs, computer-vision systems, and other foundation models. One of our most critical customer groups today is the content and studio ML practitioners across the company, whose work influences what we create and how we produce the movies and shows you see when you log into the Netflix app.

The Opportunity

We’re looking for a talented Software Engineer to join the newly formed Offline Inference team. You will design, build, and operate next-generation systems that run large-scale batch inference workloads—from minutes-long to multi-day jobs—while delivering a friction-free, self-service experience for ML practitioners across Netflix. Success in this role means not only building robust distributed systems, but also deeply understanding the ML development lifecycle so that the platforms you build truly accelerate our users.

Requirements

  • Hands-on experience with ML engineering or production systems involving training or inference of deep-learning models.
  • Proven track record of operating scalable infrastructure for ML workloads (batch or online).
  • Proficiency in one or more modern backend languages (e.g. Python, Java, Scala).
  • Production experience with containerization & orchestration (Docker, Kubernetes, ECS, etc.) and at least one major cloud provider (AWS preferred).
  • Comfortable with ambiguity and working across multiple layers of the tech stack to execute on both 0-to-1 and 1-to-100 projects.
  • Commitment to operational best practices—observability, logging, incident response, and on-call excellence.
  • Excellent written and verbal communication skills; effective collaboration with peers and partners distributed across (US) geographies and time zones.

Nice To Haves

  • Deep understanding of real-world ML development workflows and close partnership with ML researchers or modeling engineers.
  • Familiarity with cloud-based AI/ML services (e.g., SageMaker, Bedrock, Databricks, OpenAI, Vertex) or open-source stacks (Ray, Kubeflow, MLflow).
  • Experience optimizing inference for large language models, computer-vision pipelines, or other foundation models (e.g., FSDP, tensor/pipeline parallelism, quantization, distillation).
  • Open-source contributions, patents, or public speaking/blogging on ML-infrastructure topics.

Responsibilities

  • Build developer-friendly APIs, SDKs, and CLIs that let researchers and engineers—experts and non-experts alike—submit and manage batch inference jobs with minimal effort, particularly in the content and media domain.
  • Design, implement, and operate distributed services that package, schedule, execute, and monitor batch inference workflows at massive scale.
  • Instrument the platform for reliability, debuggability, observability, and cost control; define SLOs and share an equitable on-call rotation.
  • Foster a culture of engineering excellence through design reviews, mentorship, and candid, constructive feedback.

Benefits

  • Health Plans
  • Mental Health Support
  • 401(k) Retirement Plan with employer match
  • Stock Option Program
  • Disability Programs
  • Health Savings and Flexible Spending Accounts
  • Family-Forming Benefits
  • Life and Serious Injury Benefits
  • Paid Leave of Absence Programs
  • Flexible Time Off