Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware—a position that took years to build. We're looking for an infrastructure engineer to build the distributed systems that power inference at global scale. You'll design and implement the foundational layers that enable vLLM to serve models across thousands of accelerators with minimal latency and maximum reliability. Tomorrow, deploying a frontier model at scale should be as straightforward as spinning up a serverless database. The complexity doesn't disappear; it gets absorbed into the infrastructure you're building.
Job Type
Full-time
Career Level
Senior