About The Position

Nebius Serverless AI is our consumption-based compute platform for running AI workloads (training jobs, inference endpoints, and interactive development environments) without managing infrastructure. Users submit containerized workloads via CLI or UI and access GPU compute with pay-per-second billing; the platform handles provisioning, lifecycle, and cleanup. We launched GA in Q1 2026 and are now scaling toward 1,000+ users while building the next generation of capabilities: autoscaling, multi-node distributed workloads, and developer-first tooling.

We are looking for a Senior Technical Product Manager to join the Serverless AI product team. Together you will divide ownership across the product surface, but you will own your areas with full autonomy. This is not a role where you write requirements and hand them off. You will be the person who understands container runtimes, GPU scheduling, cold start optimization, and inference serving deeply enough to make correct technical trade-offs, and also the person who talks to customers, shapes the CLI experience, defines pricing, and drives adoption.

We are building the next generation of AI cloud: infrastructure designed from the ground up for GPU-intensive workloads, not retrofitted from legacy cloud. This is a lean, high-impact team where every person shapes the product directly. You need to be the kind of PM who amplifies engineering output by making the right calls on what to build and what to skip.

Requirements

  • You have built, shipped, and iterated on infrastructure or platform products used by developers or ML engineers. Not consumer apps. Not dashboards. Infrastructure.
  • You understand containers at a practical level: Docker, image registries, container runtimes, resource limits, networking. You've debugged why a container won't start, why a GPU isn't visible inside it, or why a mount isn't working.
  • You have working knowledge of GPU computing for AI/ML: what GPU types exist and when to use them, how training and inference workloads differ in resource requirements, what vLLM / TensorRT-LLM / Triton are and why they matter.
  • You can read a CLI reference and know if it's well-designed. You've shaped developer-facing APIs, CLIs, or SDKs.
  • You have run real customer discovery — not surveys, but technical conversations with engineers where you learned something that changed your product direction.
  • You have 3+ years of product management experience in cloud infrastructure, AI/ML platforms, or developer tools.
  • You can whiteboard a workload lifecycle (submit → schedule → provision → execute → cleanup) and identify failure modes at each step; see the lifecycle sketch after this list.
  • You understand autoscaling trade-offs: scale-to-zero vs. warm pools, scaling metrics (queue depth, latency, utilization), cold start implications. A toy autoscaler follows this list.
  • You are familiar with inference serving concepts: batching, model loading, quantization, KV-cache management, multi-model serving.
  • You understand distributed training concepts: data parallelism, model parallelism, communication overhead, checkpointing.
  • You can reason about pricing models: per-second vs. per-request vs. per-token, and how pricing interacts with product architecture. A worked pricing comparison follows this list.
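
To make the lifecycle requirement concrete, here is a minimal Python sketch of the kind of model we expect you to be able to whiteboard. The states, transitions, and failure modes below are illustrative assumptions, not the actual Serverless AI state machine.

  # Illustrative only: a toy serverless workload lifecycle,
  # not the actual Serverless AI state machine.
  from enum import Enum, auto

  class State(Enum):
      SUBMITTED = auto()
      SCHEDULED = auto()
      PROVISIONING = auto()
      RUNNING = auto()
      CLEANUP = auto()
      DONE = auto()
      FAILED = auto()

  # Happy path plus one representative failure mode per step.
  TRANSITIONS = {
      State.SUBMITTED:    [State.SCHEDULED, State.FAILED],     # e.g. invalid image reference
      State.SCHEDULED:    [State.PROVISIONING, State.FAILED],  # e.g. no GPU capacity
      State.PROVISIONING: [State.RUNNING, State.FAILED],       # e.g. image pull timeout
      State.RUNNING:      [State.CLEANUP, State.FAILED],       # e.g. OOM, node loss
      State.CLEANUP:      [State.DONE],                        # cleanup must always converge
  }

  def is_valid(frm: State, to: State) -> bool:
      return to in TRANSITIONS.get(frm, [])

  assert is_valid(State.SCHEDULED, State.PROVISIONING)
  assert not is_valid(State.SUBMITTED, State.RUNNING)  # no skipping provisioning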
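
Next, the scale-to-zero vs. warm-pool trade-off in miniature, as a toy queue-depth autoscaler. The scaling rule and numbers are assumptions for illustration, not how the platform actually scales.

  # Illustrative only: a toy queue-depth autoscaler. Thresholds are made up.
  def desired_replicas(queue_depth: int, per_replica_throughput: int,
                       min_warm: int = 0) -> int:
      """min_warm=0 means scale-to-zero: cheapest, but every burst pays a
      cold start. min_warm >= 1 keeps a warm pool: faster first response,
      but you pay for idle GPUs."""
      # Ceil division: replicas needed to drain the queue.
      needed = -(-queue_depth // per_replica_throughput) if queue_depth else 0
      return max(needed, min_warm)

  assert desired_replicas(0, 10, min_warm=0) == 0   # scale-to-zero when idle
  assert desired_replicas(0, 10, min_warm=2) == 2   # warm pool holds the floor
  assert desired_replicas(25, 10) == 3              # queue depth drives scale-out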
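
Finally, a worked pricing comparison. All prices are hypothetical; the point is that utilization decides which model is cheaper, which is why pricing and product architecture cannot be designed separately.

  # Illustrative only: hypothetical prices, not Nebius's.
  GPU_PER_SECOND = 0.0015        # $/GPU-second (assumed)
  PRICE_PER_1K_TOKENS = 0.0005   # $/1K output tokens (assumed)

  def per_second_cost(wall_seconds: float) -> float:
      return round(wall_seconds * GPU_PER_SECOND, 2)

  def per_token_cost(tokens: int) -> float:
      return round(tokens / 1000 * PRICE_PER_1K_TOKENS, 2)

  # High utilization: a well-batched endpoint, 1 hour of wall time, 50M tokens.
  print(per_second_cost(3600))       # 5.4  -- flat, regardless of traffic
  print(per_token_cost(50_000_000))  # 25.0 -- per-token charges more here

  # Near idle: the same endpoint, 1 hour of wall time, only 100K tokens.
  print(per_second_cost(3600))       # 5.4  -- you still pay for idle seconds
  print(per_token_cost(100_000))     # 0.05 -- per-token is far cheaper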

Nice To Haves

  • Experience at a serverless or GPU cloud company.
  • Hands-on ML engineering background — you've trained models, deployed inference endpoints, or built ML pipelines yourself.
  • Experience with Kubernetes for ML workloads (Kubeflow, KServe, Ray Serve) and understanding of why many ML teams want to avoid it.
  • Prior experience building a product from early stage to scale in a fast-growing market.
  • Background in systems engineering, distributed systems, or site reliability engineering.

Responsibilities

  • Co-own the Serverless AI product roadmap — Jobs, Endpoints, and DevPods — taking primary ownership of specific product areas while collaborating closely with the other PM on shared priorities and cross-cutting decisions.
  • Write detailed, technically precise PRDs that engineering teams can execute against. Our PRDs specify CLI syntax, API contracts, state machines, and billing models, not abstract feature descriptions. A sketch of that level of CLI detail follows this list.
  • Make build/buy/defer decisions on capabilities like autoscaling, multi-node orchestration, HTTPS termination, secret injection, and health checking based on customer signal and strategic priorities.
  • Understand the full workload lifecycle: container image pull → VM provisioning → GPU attachment → workload execution → cleanup — well enough to identify bottlenecks and propose solutions.
  • Evaluate technical trade-offs in areas like container cold start optimization (image caching, snapshot restore, warm pools), GPU scheduling and bin-packing, and storage mount performance.
  • Work directly with engineers on architecture decisions for distributed training support, endpoint autoscaling policies, and fault tolerance mechanisms.
  • Stay current on the fast-moving serverless GPU infrastructure space — new inference frameworks (vLLM, TensorRT-LLM, SGLang), container runtimes, orchestration approaches — and translate trends into product direction.
  • Run customer discovery and feedback sessions with ML engineers and platform teams at AI startups and enterprises. Turn qualitative insight into specific product actions.
  • Analyze usage data, activation funnels, and churn patterns to identify where users get stuck and what features drive retention.
  • Track market dynamics, emerging technologies, and industry trends to inform product strategy and ensure Nebius stays ahead of where the market is heading.
  • Define and iterate on pricing, packaging, and tier strategy for Serverless AI.
  • Own the technical content strategy: quickstart guides, tutorials, reference architectures, and example workloads that reduce time-to-first-job.
  • Partner with marketing on developer-focused campaigns, webinars, and conference presence.
  • Work with Solution Architects and Sales to qualify serverless-fit opportunities and support technical evaluations.
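
For a sense of the CLI-level precision our PRDs aim for, here is a hypothetical sketch using Python's argparse. The svai command and every flag below are invented for illustration; they are not the actual Nebius CLI.

  # Hypothetical only: an invented "svai" CLI surface, sketched to show
  # the level of detail a PRD pins down. Not the actual Nebius CLI.
  import argparse

  parser = argparse.ArgumentParser(prog="svai")
  sub = parser.add_subparsers(dest="command", required=True)

  submit = sub.add_parser("submit", help="submit a containerized job")
  submit.add_argument("image", help="container image reference")
  submit.add_argument("--gpus", type=int, default=1, help="GPUs per replica")
  submit.add_argument("--timeout", type=int, default=3600,
                      help="hard kill after N seconds; billing stops at kill")
  submit.add_argument("--env", action="append", default=[],
                      metavar="KEY=VALUE", help="repeatable env var injection")

  args = parser.parse_args(["submit", "cr.example.com/train:v1",
                            "--gpus", "8", "--env", "RUN_ID=42"])
  print(args.image, args.gpus, args.env)  # cr.example.com/train:v1 8 ['RUN_ID=42']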

Benefits

  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.