fal is building the fastest and most scalable infrastructure for AI inference. fal Serverless powers 1,300+ endpoints on the fal Marketplace and handles tens of millions of requests per day across production workloads. Enterprises use fal Serverless to deploy, operate, and scale custom AI models without managing infrastructure themselves; autoscaling, observability, and operational complexity are handled end to end by fal’s platform and UI. Serverless began as internal infrastructure built to support fal’s own scale and was released publicly to enterprise customers in early 2025. It is now a core, revenue-driving product with rapidly growing adoption.

fal is one of the fastest-growing AI startups, reaching Series D at a $4.5B valuation with a lean team of ~70 employees. You’ll be joining early, with meaningful ownership and direct impact on a foundational product.

As a Forward Deployed Engineer on Serverless, you will work directly with enterprise customers to help them deploy, scale, and operationalize their AI workloads on fal. This is a highly technical, customer-facing role where you’ll act as the bridge between the Sales, Product, and Infrastructure teams. You’ll join customer calls, develop a deep understanding of each customer’s architecture and needs, and translate them into actionable implementation plans and product requirements. You will be responsible for unblocking customer deployments, accelerating onboarding, and ensuring enterprise accounts reach production quickly. This is a role for someone who loves solving real-world engineering problems and wants direct ownership over outcomes that drive revenue and product growth.
Job Type: Full-time
Career Level: Mid Level
Education Level: None listed