We're hiring a Model Performance Engineer to own the speed, cost, and reliability of our model inference stack, and to build the fine-tuning infrastructure that makes the rest of the AI team faster. This is not a research role. You'll be optimizing real systems serving millions of meetings: choosing between quantization trade-offs, debugging speculative decoding, or figuring out why one GPU family's tail latency explodes at high concurrency while another's stays stable.

You'll own two things:

1. Inference performance. You'll make our models faster and cheaper: speculative decoding, quantization, serving configuration, GPU selection, batching strategies, cold-start mitigation, adapter swapping. Our traffic is extremely spiky (meetings end in 30-minute blocks), so you'll need to think in terms of throughput curves. The team places a high value on shipping a fast product.

2. Fine-tuning pipelines. The AI team constantly fine-tunes models for new tasks: distilling large teacher models for classification, training adapters for domain-specific behavior, DPO for preference tuning. Right now each project reinvents the training loop. You'll build repeatable infrastructure so an AI Engineer can get from dataset to deployed model quickly.
Job Type: Full-time
Career Level: Senior
Education Level: No Education Listed
Number of Employees: 11-50 employees