Forward Deployed Engineer (Inference & Post-Training)

Together AI, San Francisco, CA
Remote

About The Position

As a Forward Deployed Engineer (FDE) focused on Inference & Post-Training, you will be a hands-on technical partner to our most strategic customers — production AI teams looking to leverage high-quality models and run inference at scale. For us, FDE is not a replacement for a Solutions Architect; you will partner with our SAs as a deep-domain specialist in inference optimization, fine-tuning pipelines, and production deployment. As key contributors to the CX, Engineering, and Sales organizations, FDEs add tremendous value by ensuring we can meet the requirements of our most complex POCs, facilitating successful platform adoption, and guiding tailored optimization efforts — directly impacting customer success, company growth, and the hardening of our core platform.

Requirements

  • 5+ years in a technical role, with a strong focus on inference systems, open-source LLM deployment, or post-training workflows.
  • Expert-level, hands-on experience with inference engines (e.g., vLLM, TensorRT-LLM, SGLang); ability to diagnose and resolve performance issues at the engine level.
  • Deep knowledge of KV cache tuning, speculative decoding, tensor parallelism, pipeline parallelism, and quantization techniques.
  • Hands-on experience with fine-tuning and post-training pipelines, including LoRA, SFT, DPO, RLHF, and GRPO; ability to advise on system design.
  • Broad knowledge of state-of-the-art open-source models and strong judgment on model selection for specific customer use cases, hardware profiles, and performance targets.
  • Strong Python skills; comfortable working in production environments.
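The KV-cache tuning called out above often starts with back-of-envelope capacity math. As a rough illustration (the model dimensions below are hypothetical, not tied to any specific model or engine):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch_size, bytes_per_elem=2):
    """Estimate KV-cache footprint: one K and one V tensor per layer,
    sized (num_kv_heads * head_dim) per token, for every token in flight."""
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Hypothetical 32-layer model with 8 KV heads (GQA) and head_dim 128,
# serving 16 concurrent requests at 8k context in fp16 (2 bytes/elem).
gib = kv_cache_bytes(32, 8, 128, 8192, 16) / 1024**3
print(f"{gib:.1f} GiB")  # → 16.0 GiB
```

Numbers like this drive the engine-level choices the role involves: GQA head counts, context limits, batch sizes, and KV-cache quantization all trade off directly against GPU memory.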

Responsibilities

  • Select, configure, and optimize inference engines based on hardware, model architecture, and workload profile.
  • Develop configuration updates to win critical POCs and benchmarks and to optimize customer deployments; tune the KV cache, apply speculative decoding, and choose tensor-parallelism and quantization strategies to hit throughput and latency targets.
  • Drive hands-on RL training runs and optimize system design; guide customers through LoRA, SFT, DPO, RLHF, and GRPO pipelines from experimentation through production.
  • Act as the primary technical point of contact for aligned strategic accounts — monitoring and optimizing endpoint configurations, helping customers get the most out of the platform, and collaborating to ensure we hit critical milestones.
  • Establish direct alignment with strategic customers at onboarding; ensure the right inference and post-training configurations are in place from day one to improve time-to-value.
  • Directly influence our software and model roadmap by surfacing insights from the field. Contribute back to the product where needed to support customer requirements or drive a better experience. Drive early feature and research adoption with strategic logos.
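The LoRA guidance described above rests on a simple parameter-count argument: instead of updating a full weight matrix, you train two low-rank factors. A minimal sketch of that arithmetic (the 4096-dim projection and rank 16 are hypothetical example values):

```python
def lora_param_counts(d_in, d_out, rank):
    """Compare trainable parameters for a full fine-tune of one d_out x d_in
    weight matrix vs. LoRA factors A (rank x d_in) and B (d_out x rank)."""
    full = d_in * d_out
    lora = rank * d_in + d_out * rank
    return full, lora

# Hypothetical 4096x4096 attention projection with rank-16 adapters.
full, lora = lora_param_counts(4096, 4096, 16)
print(full, lora, f"{100 * lora / full:.2f}%")  # → 16777216 131072 0.78%
```

Advising customers on rank, target modules, and adapter placement is largely about navigating this trade-off between trainable-parameter budget and task quality.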

Benefits

  • Competitive compensation
  • Startup equity
  • Health insurance
  • Other benefits
  • Flexibility for remote work