At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises - including Mistral, Poolside, Black Forest Labs, Meta, and more - to unlock compute at the speed of light. We’re working with urgency to make AGI a reality, and our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what’s next.

We’re hiring a Product Manager to own our AI platform roadmap, including managed inference and agent platforms. You’ll define how Fluidstack enables customers to deploy, scale, and optimize LLM inference workloads, from model serving and routing to agent orchestration and compound AI systems. The role requires balancing customer needs for low latency and high throughput against the operational realities of GPU utilization, cost efficiency, and platform reliability. You’ll work across engineering, ML research, and go-to-market teams to position Fluidstack against inference-first competitors such as Together AI, Fireworks, Baseten, Modal, and Replicate.
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed