About The Position

Our team analyzes inference stack performance across the application, model, and fleet layers to identify bottlenecks and drive faster, cheaper inference. We combine systems profiling, benchmarking, and analysis to understand where time and cost are spent, then turn that understanding into performance optimizations and into models that project performance and capacity needs for future launches.

In this role, you will model inference performance across the application, model, and fleet layers with increasing fidelity. You will build cost-to-serve estimates from microbenchmarks and create tools that help cross-functional teams reason about tradeoffs among latency, capacity, utilization, and cost.

Requirements

The ideal candidate:

  • Enjoys reasoning from first principles about distributed systems, model inference, and hardware efficiency.
  • Is comfortable working across abstraction layers, from application behavior down to kernels, accelerators, networking, and fleet scheduling.
  • Has deep expertise in performance profiling, benchmarking, analysis, and optimization.
  • Works well with engineering and research teams to improve real production systems.

Responsibilities

  • Build and refine performance models that translate microbenchmark results into cost-to-serve estimates.
  • Analyze inference workloads end to end across applications, models, and fleet infrastructure.
  • Improve tooling that pinpoints latency and throughput bottlenecks across layers.
  • Partner with other teams to turn performance insights into concrete improvements and project how future changes affect inference.
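To make the cost-to-serve modeling concrete: a minimal sketch of turning a microbenchmark throughput number into a dollar cost per million tokens. All parameters, numbers, and the formula below are illustrative assumptions for this posting's description, not details of any actual system.

```python
# Minimal sketch: translating a microbenchmark result into a
# cost-to-serve estimate. Every input here (replica size, GPU price,
# utilization) is a hypothetical assumption, not data from this posting.

def cost_per_million_tokens(tokens_per_sec: float,
                            gpus_per_replica: int,
                            gpu_hourly_cost: float,
                            utilization: float) -> float:
    """Estimated dollars to serve one million tokens on one replica."""
    # Effective throughput after accounting for fleet utilization:
    # replicas rarely run at their benchmarked peak.
    effective_tps = tokens_per_sec * utilization
    # Tokens produced per replica-hour.
    tokens_per_hour = effective_tps * 3600
    # Hourly cost of the whole replica.
    replica_hourly_cost = gpus_per_replica * gpu_hourly_cost
    # Cost per token, scaled to one million tokens.
    return replica_hourly_cost / tokens_per_hour * 1_000_000

# Example with made-up numbers: an 8-GPU replica benchmarked at
# 5,000 tokens/s, $2.50 per GPU-hour, running at 60% utilization.
estimate = cost_per_million_tokens(5000, 8, 2.50, 0.60)
print(f"${estimate:.2f} per million tokens")  # → $1.85 per million tokens
```

A model like this is deliberately simple; the role's harder work is validating each input (real utilization, real per-request token mixes, networking and scheduling overheads) so the estimate tracks production behavior.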


What This Job Offers

Job Type: Full-time
Career Level: Senior
Education Level: No education listed
Number of Employees: 1-10 employees
