About The Position

Nebius builds the infrastructure serious AI teams run on — GPU clusters, inference runtimes, agent development environments, data pipelines — all of it purpose-built for the most demanding AI workloads. We are now building the ecosystem function that ensures the best AI companies choose to build on us, integrate with us, and stay. As a Forward Deployed Engineer, Ecosystem, you will sit at the intersection of solution architecture and hands-on engineering. You assess how partner products actually work on our stack, define the reference architecture for each integration, build the working prototype that proves it, and translate what you find into product requirements that shape what Nebius ships next.

Requirements

  • 6+ years of hands-on engineering experience in AI application development, ML systems, or AI infrastructure
  • Deep working knowledge of the AI developer stack — LLM APIs, inference runtimes, orchestration frameworks, vector databases, RAG architectures, agentic pipelines — built through shipping, not reading
  • Hands-on experience with agentic frameworks such as LangChain, LangGraph, CrewAI, AutoGen, or equivalent
  • Strong Python programming skills and comfort prototyping end-to-end AI systems quickly
  • Experience defining reference architectures and technical patterns — not just implementing them
  • Proven ability to move from idea to working prototype fast — you have shipped meaningful things under time pressure and found it energizing
  • Experience building integrations across APIs and developer platforms — you understand where the complexity actually lives
  • Comfortable working across both external partner engineering teams and internal Nebius product and engineering teams simultaneously
  • Strong technical communication — you can explain architecture decisions and integration findings to a founding CTO and a non-technical partner lead in the same day

Nice To Haves

  • Experience with inference frameworks and optimization: vLLM, SGLang, TensorRT-LLM, speculative decoding, quantization, batching, KV-cache routing
  • Familiarity with NVIDIA's software stack: CUDA, TensorRT, NeMo, or equivalent
  • Experience with multimodal AI models — vision-language, speech, or structured data
  • Won or placed at major AI hackathons in the past 12 months
  • Worked as a developer advocate, solutions engineer, or technical partner manager at a leading AI platform or developer tooling company
  • Been an early engineer at a YC-backed AI startup — you built the product under real constraints
  • Open source projects or public demos with meaningful community adoption
  • Proficiency with DevOps tools: Docker, Kubernetes, Git

Responsibilities

  • Design and prototype integrations between partner products and the Nebius platform — fast, hands-on, and technically sound
  • Define reference architectures for partner integrations — not just what works, but how it should work at scale and in production
  • Scope partner architectures against our platform — how does this product actually work on our stack, where does it snap together, where does it break
  • Build production-quality proof-of-concepts across the AI stack — agentic pipelines, RAG architectures, inference optimization patterns, and multi-model orchestration — that serve as the starting point for product creation: not a requirements doc, a working thing
  • Maintain a library of reference architectures and integration patterns that internal product and engineering teams can build from
  • Work directly with partner engineering teams to scope, prototype, and progress integrations
  • Assess partner architectures honestly — if the integration is painful, that is signal; if it snaps together in a weekend, that is also signal; report both
  • Provide technical guidance to partners on how to maximize performance, reliability, and cost efficiency on Nebius infrastructure
  • Produce technical scoping that gives your pod partner and internal teams a clear picture of integration feasibility, depth, and complexity
  • Translate external integration findings into actionable product requirements for Nebius platform teams
  • Work with ISV partners, SI teams, and field teams to scale solution adoption and drive revenue once a solution is ready
  • Surface recurring architectural patterns and integration gaps to inform platform roadmap decisions
  • Participate in platform planning as the technical voice of what you are seeing and building in the field
  • Represent Nebius at hackathons, in open source communities, and at technical events
  • Build in public — demos, reference architectures, and integrations that establish Nebius as the platform serious AI builders choose
  • Stay current with the AI tooling ecosystem — you know what shipped last week and what it means for our stack

Benefits

  • 100% company-paid medical, dental, and vision coverage for employees and families.
  • Retirement plan with up to 4% company match and immediate vesting.
  • 20 weeks of paid leave for primary caregivers, 12 weeks for secondary caregivers.
  • Up to $85/month for mobile and internet.
  • Company-paid short-term, long-term, and life insurance coverage.
  • Competitive salary and comprehensive benefits package.
  • Opportunities for professional growth within Nebius.
  • Flexible working arrangements.
  • A dynamic and collaborative work environment that values initiative and innovation.