We're building the company that will de-risk the largest infrastructure build-out in history. When people finance GPU clusters, the datacenters housing them, and the infrastructure powering them, they need "offtake": someone has signed a contract to lease the cluster for a period of time before it's even built. Financing a GPU cluster is inherently risky, since margins are thin and volumes are huge. Lenders don't want to take on the risk that cluster developers can't repay their loans, and cluster developers really don't want to risk failing to sell their clusters. As a result, risk is offloaded to the customer through fixed-price, long-term contracts. If you don't mitigate this customer risk, you get a bubble. This isn't SaaS anymore: application-layer companies sign multi-year contracts for compute and inference, but sell to their own customers on monthly subscriptions. If you mess up a purchase, it's game over; a minor shift in your revenue growth rate can mean the difference between profit and bankruptcy.

But what if companies could exit their contracts by selling them back to the market? Without that option, as AI scales, compute only becomes available to those who can effectively take on that risk. A two-person startup in a San Francisco Victorian can't realistically sign a five-year take-or-pay contract on a $100M supercomputer. But it may be able to buy the month of capacity that someone else sold back. So that's what we make: a liquid market for GPU offtake.

About the Tooling Team

We are a small team focused on making SFCompute engineering faster, more observable, and more reliable. Our work spans data infrastructure, developer experience, pre-production environments, and AI tooling, but the common thread isn't any specific domain: it's that we find the problems nobody else owns and make them solved problems. Everyone on this team wears many hats.
You'll work across the stack, collaborate with all parts of engineering, and regularly take on problems that don't fit neatly into a job description. If you want a narrow scope and a clear ticket queue, this team isn't it. If you want to have a large, legible impact on a small team building serious infrastructure, read on.

The Role

We're looking for an Applied AI Engineer to own how AI works for our engineering team. You'll audit how we use tools like Claude Code, identify where AI assistance breaks down or creates friction, and fix it: by writing better skills, rules, and prompts; by improving the context we give AI about our codebase; and by changing the workflows that aren't working. This is a new kind of role. It requires someone who thinks rigorously about information flow, understands how large language models reason, and cares enough about developer experience to go fix things rather than just document them.
Job Type: Full-time
Career Level: Mid Level