About The Position

The vLLM and LLM-D Engineering team at Red Hat is looking for a customer-obsessed developer to join our team as a Forward Deployed Engineer. In this role, you will not just build software; you will be the bridge between our cutting-edge inference platform (LLM-D and vLLM) and our customers' most critical production environments. You will interface directly with engineering teams at our customers to deploy, optimize, and scale distributed Large Language Model (LLM) inference systems. You will solve "last mile" infrastructure challenges that defy off-the-shelf solutions, ensuring that massive models run with low latency and high throughput on complex Kubernetes clusters. This is not a sales engineering role; you will be part of the core vLLM and LLM-D engineering team. What you will do and what you will bring are detailed in the Responsibilities and Requirements sections below.

Requirements

  • 8+ Years of Engineering Experience: You have a proven track record in Backend Systems, SRE, or Infrastructure Engineering.
  • Customer Fluency: You speak both "Systems Engineering" and "Business Value".
  • Bias for Action: You prefer rapid prototyping and iteration over theoretical perfection. You are comfortable operating in ambiguity and taking ownership of the outcome.
  • Deep Kubernetes Expertise: You are fluent in K8s primitives, from defining custom resources (CRDs, Operators, Controllers) to configuring modern ingress via the Gateway API. You have deep experience with stateful workloads and high-performance networking, including the ability to tune scheduling (affinity/tolerations) for GPU workloads and troubleshoot complex CNI failures.
  • AI Inference Proficiency: You understand how an LLM forward pass works. You know what KV caching is, why prefill/decode disaggregation matters, why context length impacts performance, and how continuous batching works in vLLM (see the KV-cache sizing sketch after this list).
  • Systems Programming: Proficiency in Python (for model interfaces) and Go (for Kubernetes controllers/scheduler logic).
  • Infrastructure as Code: Experience with Helm, Terraform, or similar tools for reproducible deployments.
  • Cloud & GPU Hardware Fluency: You are comfortable spinning up clusters and deploying LLMs on bare-metal and hyperscaler Kubernetes clusters.
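
To illustrate the reasoning behind the "context length impacts performance" point above, here is a minimal back-of-the-envelope sketch (not part of the requirements): the KV cache grows linearly with context length and batch size, so long contexts consume GPU memory that would otherwise hold more concurrent requests. The model dimensions below are assumed purely for illustration.

```python
# Rough KV-cache sizing sketch (illustrative numbers, not tied to any specific model).
# KV cache bytes per sequence = 2 (K and V) * layers * kv_heads * head_dim
#                               * seq_len * bytes_per_element

num_layers = 32        # assumed transformer depth
num_kv_heads = 8       # assumed grouped-query KV heads
head_dim = 128         # assumed per-head dimension
bytes_per_elem = 2     # fp16/bf16

def kv_cache_gib(seq_len: int, batch_size: int) -> float:
    """Approximate KV-cache footprint in GiB for a given context length and batch."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem
    return per_token * seq_len * batch_size / 2**30

# Longer contexts and larger batches consume GPU memory that the serving engine
# must manage, which is why context length directly affects achievable throughput.
for seq_len in (4_096, 32_768, 131_072):
    print(f"{seq_len:>7} tokens, batch 8: {kv_cache_gib(seq_len, 8):6.1f} GiB")
```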

Nice To Haves

  • Experience contributing to open-source AI infrastructure projects (e.g., KServe, vLLM, Kubernetes).
  • Knowledge of Envoy Proxy or Inference Gateway (IGW).
  • Familiarity with model optimization techniques such as Quantization (AWQ, GPTQ) and Speculative Decoding (see the short sketch below).
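
As a quick illustration of the quantization point, a minimal vLLM offline-inference sketch is shown below; the model ID is a placeholder and constructor argument names can differ between vLLM releases.

```python
# Hedged sketch: loading an AWQ-quantized checkpoint with vLLM's offline API.
# The model ID is a placeholder; argument names may vary between vLLM releases.
from vllm import LLM, SamplingParams

llm = LLM(
    model="org/some-awq-model",  # hypothetical model ID for illustration
    quantization="awq",          # ask vLLM to use its AWQ kernels
)
params = SamplingParams(temperature=0.0, max_tokens=64)

# Passing several prompts at once lets vLLM's continuous batching schedule them
# together rather than serving one request at a time.
outputs = llm.generate(["Explain KV caching in one sentence."] * 4, params)
for out in outputs:
    print(out.outputs[0].text)
```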

Responsibilities

  • Orchestrate Distributed Inference: Deploy and configure LLM-D and vLLM on Kubernetes clusters. You will set up and configure advanced deployment patterns such as disaggregated serving, KV-cache-aware routing, and KV cache offloading to maximize hardware utilization.
  • Optimize for Production: Go beyond standard deployments by running performance benchmarks, tuning vLLM parameters, and configuring intelligent inference routing policies to meet SLOs for latency and throughput. You care about Time Per Output Token (TPOT), GPU utilization, GPU networking optimizations, and Kubernetes scheduler efficiency (see the TPOT measurement sketch after this list).
  • Code Side-by-Side: Work directly with customer engineers to write production-quality code (Python/Go/YAML) that integrates our inference engine into their existing Kubernetes ecosystem.
  • Solve the "Unsolvable": Debug complex interaction effects between specific model architectures (e.g., MoE, large context windows), hardware accelerators (NVIDIA GPUs, AMD GPUs, TPUs), and Kubernetes networking (Envoy/Istio).
  • Feedback Loop: Act as the "Customer Zero" for our core engineering teams. You will channel field learnings back to product development, influencing the roadmap for LLM-D and vLLM features.
  • Travel only as needed to customers to present, demo, or help execute proof-of-concepts.
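
For the TPOT and latency SLOs mentioned above, a minimal measurement sketch against an OpenAI-compatible vLLM endpoint might look like the following; the base URL and model name are placeholders, and streamed chunks only approximate token counts.

```python
# Hedged TPOT measurement sketch against an OpenAI-compatible vLLM server.
# The base URL and model name are placeholders for whatever you deployed.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="placeholder-model",
    messages=[{"role": "user", "content": "Summarize PagedAttention in 3 sentences."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content or ""
    if delta:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        chunks += 1
end = time.perf_counter()

if first_token_at is None:
    raise SystemExit("no tokens streamed back")

# TTFT captures prefill latency; TPOT approximates per-token decode latency.
ttft = first_token_at - start
tpot = (end - first_token_at) / max(chunks - 1, 1)
print(f"TTFT: {ttft * 1000:.1f} ms, approx TPOT: {tpot * 1000:.1f} ms over {chunks} chunks")
```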

Benefits

  • Comprehensive medical, dental, and vision coverage
  • Flexible Spending Account - healthcare and dependent care
  • Health Savings Account - high deductible medical plan
  • Retirement 401(k) with employer match
  • Paid time off and holidays
  • Paid parental leave plans for all new parents
  • Leave benefits including disability, paid family medical leave, and paid military leave
  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
