Forward Deployed Engineer (GPU Clusters)

Together AI · San Francisco, CA (Remote)

About The Position

As a Forward Deployed Engineer (FDE) focused on large-scale GPU clusters, you will be a hands-on technical partner to our strategic customers: the world’s leading AI model builders. You will partner with our Solutions Architects (SAs) as a deep-domain specialist in large-scale infrastructure, storage, high-performance networking, and cluster orchestration. As key contributors to the Customer Experience (CX), Engineering, and Sales organizations, FDEs add tremendous value by ensuring we can meet the requirements of our most complex proofs of concept (POCs), facilitating successful platform adoption for our strategic customers, and guiding tailored optimization efforts, directly impacting company growth and the hardening of our core platform.

Requirements

  • 5+ years in a technical role, with a strong focus on large-scale GPU infrastructure.
  • Deep, hands-on experience with Kubernetes (specifically GPU-operator and device plugins) and/or SLURM for workload scheduling.
  • Expert knowledge of InfiniBand, RoCE, and NVLink; ability to diagnose network failures that degrade collective communication (NCCL).
  • Familiarity with parallel file systems (VAST or Weka preferred) and object storage, specifically in the context of large-scale checkpointing.
  • Ability to run and interpret training benchmarks and communication tests to validate cluster health and performance.
  • Proficiency in Python and shell scripting; experience with Ansible or similar tools for automated cluster configuration.
  • Willingness to dive into the customer's stack to solve hard problems, and comfort with the high-stakes, fast-paced environment of frontier model labs.

Responsibilities

  • Cluster Hardening & Validation: Design and execute rigorous pre-handover test suites (NCCL, DCGM, GPU Burn) to ensure clusters are stable under the extreme stress of multi-node training.
  • Technical Partnership: Act as the primary technical point of contact for model labs, helping them tune their orchestration layer (Kubernetes or SLURM) for maximum throughput.
  • Infrastructure Optimization: Profile and debug low-level bottlenecks in InfiniBand (IB) fabrics, NVLink topologies, and high-performance storage systems.
  • Opinionated Onboarding: Build reference designs and "out-of-the-box" configurations for training frameworks to reduce customer time-to-train.
  • Benchmarking & Migration: Lead complex benchmarking exercises to demonstrate the performance impact of migrating to new hardware families or Together AI’s optimized infrastructure.
  • Product Feedback Loop: Directly influence our hardware and software roadmap by surfacing edge cases and performance gaps found during customer deployments.
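To make the "Cluster Hardening & Validation" work concrete, a pre-handover test pass of the kind described above might be sketched as follows. This is an illustrative outline only, not Together AI's actual test suite: the flags shown are the standard ones for NVIDIA's `dcgmi diag` and the `all_reduce_perf` binary from nccl-tests, while the results directory, node count, and the use of SLURM's `srun` as the launcher are assumptions.

```shell
#!/usr/bin/env sh
# Hypothetical pre-handover validation sketch. Tool names (dcgmi,
# srun, all_reduce_perf) are real; paths and node counts are
# placeholders. Each step is guarded so the script degrades
# gracefully on machines without the tools installed.
RESULTS_DIR="${RESULTS_DIR:-/tmp/cluster-validation}"
mkdir -p "$RESULTS_DIR"

# 1. Per-node GPU health: DCGM level-3 diagnostic exercises memory,
#    PCIe, and stress tests on every GPU the hostengine can see.
if command -v dcgmi >/dev/null 2>&1; then
    dcgmi diag -r 3 > "$RESULTS_DIR/dcgm-diag.txt"
fi

# 2. Collective bandwidth: NCCL all-reduce sweep from 8 B to 8 GB
#    (-b start, -e end, -f multiplier) across all nodes; reported
#    bus bandwidth should stay near the fabric's line rate.
if command -v srun >/dev/null 2>&1; then
    srun -N "${NODES:-2}" --ntasks-per-node=8 \
        all_reduce_perf -b 8 -e 8G -f 2 -g 1 \
        > "$RESULTS_DIR/nccl-allreduce.txt"
fi

echo "results in $RESULTS_DIR"
```

In practice a handover suite like this would also archive the outputs alongside the cluster's topology files, so regressions after firmware or driver updates can be diagnosed by diffing against the accepted baseline.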

Benefits

  • Health insurance
  • Startup equity
  • Flexible remote work


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: None listed
  • Number of Employees: 1-10 employees
