Applied AI Engineer, Inference

Weights & Biases
Sunnyvale, CA
Hybrid

About The Position

CoreWeave, the AI Hyperscaler™, acquired Weights & Biases to create the most powerful end-to-end platform to develop, deploy, and iterate AI faster. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe, and was ranked as one of the TIME100 most influential companies of 2024.

By bringing together CoreWeave’s industry-leading cloud infrastructure with the best-in-class tools AI practitioners know and love from Weights & Biases, we’re setting a new standard for how AI is built, trained, and scaled. The integration of our teams and technologies is accelerating our shared mission: to empower developers with the tools and infrastructure they need to push the boundaries of what AI can do. From experiment tracking and model optimization to high-performance training clusters, agent building, and inference at scale, we’re combining forces to serve the full AI lifecycle — all in one seamless platform.

Weights & Biases has long been trusted by over 1,500 organizations — including AstraZeneca, Canva, Cohere, OpenAI, Meta, Snowflake, Square, Toyota, and Wayve — to build better models, AI agents, and applications. Now, as part of CoreWeave, that impact is amplified across a broader ecosystem of AI innovators, researchers, and enterprises. As we unite under one vision, we’re looking for bold thinkers and agile builders who are excited to shape the future of AI alongside us. If you're passionate about solving complex problems at the intersection of software, hardware, and AI, there's never been a more exciting time to join our team.

What You'll Do

Description of the team: The Inference team is responsible for delivering high-performance model serving capabilities that meet the needs of real production workloads. We work at the intersection of model behavior, serving systems, hardware, and customer requirements to improve throughput, latency, reliability, and quality across our inference stack.
About the role: We are looking for an Applied AI Engineer to help us understand, measure, and improve the real-world performance of our inference platform. In the near term, this role will focus on building and running rigorous benchmarks, profiling model and system behavior, identifying bottlenecks, and driving targeted optimizations for both platform-wide and customer-specific workloads. This role is intentionally scoped around applied performance work in support of the Inference organization. Initial responsibilities center on benchmarking, optimization, and workload-driven research rather than broad ownership of frontier model research agendas. Over time, the scope of the role is expected to broaden as the team and product mature.

Requirements

  • 4+ years of experience in machine learning, systems, performance engineering, or adjacent applied engineering work.
  • Strong programming skills in Python and comfort working in production engineering environments.
  • Experience running empirical evaluations, benchmarks, or experiments and translating results into concrete engineering decisions.
  • Familiarity with LLM inference systems and tools such as vLLM, SGLang, TensorRT-LLM, or similar model-serving stacks.
  • Understanding of the practical tradeoffs involved in latency, throughput, batching, GPU utilization, quantization, and quality regression analysis.
  • Ability to work across model, systems, and product boundaries and stay focused on outcomes that matter for customers.
  • Strong written communication and a bias toward making technical work legible and reproducible for others.

Nice To Haves

  • Experience optimizing inference workloads on modern GPU hardware.
  • Experience with profiling tools such as Nsight Systems, PyTorch profilers, or custom telemetry pipelines.
  • Familiarity with benchmark suites and evaluation frameworks for coding, reasoning, or agent workloads.
  • Experience using real production traces or customer traffic patterns to guide optimization work.
  • Experience balancing model quality and serving performance when evaluating quantization, speculative decoding, or other acceleration strategies.

Responsibilities

  • Build and maintain benchmarking workflows that measure latency, throughput, quality regressions, and cost across priority models and serving configurations.
  • Benchmark our inference stack against realistic customer workloads and external provider baselines to identify performance gaps and improvement opportunities.
  • Profile model-serving behavior across frameworks, runtimes, and hardware configurations to find bottlenecks in prefill, decode, KV cache usage, batching, graph capture, quantization, and related systems.
  • Drive targeted optimization efforts for specific customer and product workloads, including tuning serving configurations, evaluating runtime features, and validating changes against representative traces and benchmarks.
  • Design and run experiments on model-serving techniques such as quantization, speculative decoding, caching strategies, routing, and other inference optimizations, with careful attention to quality and correctness tradeoffs.
  • Partner closely with inference platform engineers to productionize improvements and establish repeatable workflows for performance testing and regression detection.
  • Produce clear technical writeups and recommendations that help the team make better decisions about model configurations, runtime choices, hardware allocation, and customer-specific deployment strategies.
  • Contribute additional applied research over time as needed to support inference quality, optimization, and product performance goals.
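The benchmarking work described above typically starts with a harness that measures per-request latency percentiles and aggregate throughput. The sketch below is illustrative only, not part of the posting: the `send_request` stub stands in for a real model-serving call (in practice it would hit an inference endpoint such as a vLLM server), and all names are hypothetical.

```python
# Minimal sketch of a latency/throughput benchmark harness.
# `send_request` is a hypothetical stand-in for a real inference call.
import time
import statistics


def send_request(prompt: str) -> str:
    # Simulated serving latency; a real harness would call the endpoint here.
    time.sleep(0.001)
    return prompt.upper()


def run_benchmark(prompts, send=send_request):
    """Measure per-request latency and aggregate throughput for a workload."""
    latencies = []
    start = time.perf_counter()
    for p in prompts:
        t0 = time.perf_counter()
        send(p)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "p50_latency_s": statistics.median(latencies),
        "p95_latency_s": sorted(latencies)[int(0.95 * (len(latencies) - 1))],
        "throughput_rps": len(prompts) / elapsed,
    }


if __name__ == "__main__":
    report = run_benchmark([f"prompt-{i}" for i in range(100)])
    for key, value in sorted(report.items()):
        print(f"{key}: {value:.4f}")
```

A production harness would additionally replay representative traces with realistic concurrency and token-length distributions, and pair these latency/throughput numbers with quality-regression checks, as the responsibilities above describe.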

Benefits

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance
  • Voluntary supplemental life insurance
  • Short and long-term disability insurance
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health
  • Family-Forming support provided by Carrot
  • Paid Parental Leave
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 101-250 employees
