Applied AI Engineer, Inference

Weights & Biases · Sunnyvale, CA (Hybrid)
$165,000 - $242,000

About The Position

The Inference team is responsible for delivering high-performance model serving capabilities that meet the needs of real production workloads. We work at the intersection of model behavior, serving systems, hardware, and customer requirements to improve throughput, latency, reliability, and quality across our inference stack.

We are looking for an Applied AI Engineer to help us understand, measure, and improve the real-world performance of our inference platform. In the near term, this role will focus on building and running rigorous benchmarks, profiling model and system behavior, identifying bottlenecks, and driving targeted optimizations for both platform-wide and customer-specific workloads.

This role is intentionally scoped around applied performance work in support of the Inference organization. Initial responsibilities center on benchmarking, optimization, and workload-driven research rather than broad ownership of frontier model research agendas. Over time, the scope of the role is expected to broaden as the team and product mature.
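
For concreteness, here is a minimal sketch of the kind of benchmarking harness this work involves, assuming an OpenAI-compatible completions endpoint; the endpoint URL, model name, prompt set, and request parameters are all illustrative placeholders, not details of any actual serving stack.

```python
import statistics
import time

import requests

# Hypothetical OpenAI-compatible endpoint; URL, model, and prompts are placeholders.
ENDPOINT = "http://localhost:8000/v1/completions"
MODEL = "example-model"
PROMPTS = ["Summarize the benefits of KV caching."] * 16  # illustrative workload

def run_once(prompt: str) -> tuple[float, int]:
    """Send one completion request; return (latency_s, completion_tokens)."""
    start = time.perf_counter()
    resp = requests.post(
        ENDPOINT,
        json={"model": MODEL, "prompt": prompt, "max_tokens": 128},
        timeout=120,
    )
    latency = time.perf_counter() - start
    resp.raise_for_status()
    tokens = resp.json()["usage"]["completion_tokens"]
    return latency, tokens

latencies, token_counts = [], []
for prompt in PROMPTS:
    lat, toks = run_once(prompt)
    latencies.append(lat)
    token_counts.append(toks)

print(f"p50 latency: {statistics.median(latencies):.2f}s")
print(f"p95 latency: {statistics.quantiles(latencies, n=20)[18]:.2f}s")
# Sequential throughput only; a real harness would also sweep concurrency.
print(f"throughput:  {sum(token_counts) / sum(latencies):.1f} tok/s")
```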

Requirements

  • 4+ years of experience in machine learning, systems, performance engineering, or adjacent applied engineering work.
  • Strong programming skills in Python and comfort working in production engineering environments.
  • Experience running empirical evaluations, benchmarks, or experiments and translating results into concrete engineering decisions.
  • Familiarity with LLM inference systems and tools such as vLLM, SGLang, TensorRT-LLM, or similar model-serving stacks.
  • Understanding of the practical tradeoffs involved in latency, throughput, batching, GPU utilization, quantization, and quality regression analysis (a toy illustration of the batching tradeoff follows this list).
  • Ability to work across model, systems, and product boundaries and stay focused on outcomes that matter for customers.
  • Strong written communication and a bias toward making technical work legible and reproducible for others.
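
As a toy illustration of the batching tradeoff named above: with assumed constants (BASE_STEP_MS and PER_SEQ_MS are made up for illustration, not measured from any real system), growing the decode batch raises aggregate throughput while stretching each individual request's latency.

```python
# Toy model of the batching tradeoff; all numbers are assumptions, not measurements.
BASE_STEP_MS = 10.0      # assumed fixed cost per decode step
PER_SEQ_MS = 0.5         # assumed marginal cost per sequence in the batch
OUTPUT_TOKENS = 256      # tokens generated per request

for batch in (1, 8, 32, 128):
    step_ms = BASE_STEP_MS + PER_SEQ_MS * batch       # one decode step for the batch
    throughput = batch / step_ms * 1000               # tokens/s across all requests
    request_latency = step_ms * OUTPUT_TOKENS / 1000  # seconds to finish one request
    print(f"batch={batch:4d}  {throughput:8.0f} tok/s  {request_latency:6.2f} s/request")
```

Under these made-up constants, throughput grows roughly 18x from batch 1 to batch 128 while per-request latency grows about 7x; finding the right operating point per workload is exactly the tradeoff the requirement describes.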

Nice To Haves

  • Experience optimizing inference workloads on modern GPU hardware.
  • Experience with profiling tools such as Nsight Systems, PyTorch profilers, or custom telemetry pipelines (see the profiler sketch after this list).
  • Familiarity with benchmark suites and evaluation frameworks for coding, reasoning, or agent workloads.
  • Experience using real production traces or customer traffic patterns to guide optimization work.
  • Experience balancing model quality and serving performance when evaluating quantization, speculative decoding, or other acceleration strategies.
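
As a concrete example of the profiling experience listed above, a minimal torch.profiler session might look like the following; the Linear layer is a purely illustrative stand-in for a real model's forward pass.

```python
import torch
from torch.profiler import ProfilerActivity, profile

# Hypothetical stand-in for one decode step; substitute a real model forward.
model = torch.nn.Linear(4096, 4096)
x = torch.randn(32, 4096)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    model, x = model.cuda(), x.cuda()
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    with torch.no_grad():
        for _ in range(10):
            model(x)

# Rank ops by time to spot bottlenecks; export a trace viewable in chrome://tracing.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
prof.export_chrome_trace("decode_step_trace.json")
```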

Responsibilities

  • Build and maintain benchmarking workflows that measure latency, throughput, quality regressions, and cost across priority models and serving configurations.
  • Benchmark our inference stack against realistic customer workloads and external provider baselines to identify performance gaps and improvement opportunities.
  • Profile model-serving behavior across frameworks, runtimes, and hardware configurations to find bottlenecks in prefill, decode, KV cache usage, batching, graph capture, quantization, and related systems.
  • Drive targeted optimization efforts for specific customer and product workloads, including tuning serving configurations, evaluating runtime features, and validating changes against representative traces and benchmarks.
  • Design and run experiments on model-serving techniques such as quantization, speculative decoding, caching strategies, routing, and other inference optimizations, with careful attention to quality and correctness tradeoffs (a back-of-envelope speculative decoding model follows this list).
  • Partner closely with inference platform engineers to productionize improvements and establish repeatable workflows for performance testing and regression detection.
  • Produce clear technical writeups and recommendations that help the team make better decisions about model configurations, runtime choices, hardware allocation, and customer-specific deployment strategies.
  • Contribute additional applied research over time as needed to support inference quality, optimization, and product performance goals.
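
As one example of the experiment design described above, the expected gain from speculative decoding can be estimated on paper using the standard expected-accepted-tokens formula from the speculative sampling literature; the acceptance rate and draft cost ratio below are assumptions, not measurements.

```python
# Back-of-envelope speculative decoding model; ALPHA and DRAFT_COST are assumed.
def expected_tokens_per_step(alpha: float, k: int) -> float:
    """Expected tokens emitted per verify step with draft length k and
    per-token acceptance rate alpha (geometric acceptance model)."""
    return (1 - alpha ** (k + 1)) / (1 - alpha)

ALPHA = 0.7        # assumed draft/target agreement rate
DRAFT_COST = 0.1   # assumed cost of one draft step relative to one target step

for k in (1, 2, 4, 8):
    tokens = expected_tokens_per_step(ALPHA, k)
    step_cost = 1 + DRAFT_COST * k  # one target verify pass plus k draft passes
    print(f"k={k}: {tokens:.2f} tokens/step, est. speedup {tokens / step_cost:.2f}x")
```

The diminishing returns at longer draft lengths in this toy model are exactly the kind of quality/performance tradeoff such experiments would then validate against representative traces.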

Benefits

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance
  • Voluntary supplemental life insurance
  • Short- and long-term disability insurance
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement
  • Ability to participate in the Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health
  • Family-Forming support provided by Carrot
  • Paid Parental Leave
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption