Member of Technical Staff - GPU Infrastructure

Prime Intellect, San Francisco, CA

About The Position

At Prime Intellect, we're enabling the next generation of AI breakthroughs by helping our customers deploy and optimize massive GPU clusters. In this role, you'll be the technical expert who turns customer requirements into production-ready systems capable of training the world's most advanced AI models. We recently raised $15M in funding ($20M raised in total) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka Labs, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI), and many others.

Requirements

  • 3+ years hands-on experience with GPU clusters and HPC environments
  • Deep expertise with SLURM and Kubernetes in production GPU settings
  • Proven experience with InfiniBand configuration and troubleshooting
  • Strong understanding of NVIDIA GPU architecture, CUDA ecosystem, and driver stack
  • Experience with infrastructure automation tools (Ansible, Terraform)
  • Proficiency in Python, Bash, and systems programming
  • Track record of customer-facing technical leadership
  • NVIDIA driver installation and troubleshooting (CUDA, Fabric Manager, DCGM)
  • Container runtime configuration for GPUs (Docker, containerd, Enroot)
  • Linux kernel tuning and performance optimization
  • Network topology design for AI workloads
  • Power and cooling requirements for high-density GPU deployments
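The requirements above combine Python/Bash proficiency with GPU health tooling such as `nvidia-smi` and DCGM. As a flavor of the day-to-day work, here is a minimal, hypothetical Python sketch that parses `nvidia-smi --query-gpu` CSV output and flags GPUs that are overheating or reporting uncorrected ECC errors. The query fields use the standard `nvidia-smi` interface, but the 85 °C threshold, function names, and sample data are illustrative assumptions, not part of this posting.

```python
import csv
import io

# Fields queried via the standard `nvidia-smi --query-gpu` interface:
#   nvidia-smi --query-gpu=index,name,temperature.gpu,utilization.gpu,\
#       ecc.errors.uncorrected.volatile.total --format=csv,noheader,nounits
FIELDS = "index,name,temperature.gpu,utilization.gpu,ecc.errors.uncorrected.volatile.total"

def parse_gpu_report(csv_text):
    """Parse `--format=csv,noheader,nounits` output into per-GPU dicts."""
    rows = []
    for rec in csv.reader(io.StringIO(csv_text)):
        idx, name, temp, util, ecc = [f.strip() for f in rec]
        rows.append({
            "index": int(idx),
            "name": name,
            "temp_c": int(temp),
            "util_pct": int(util),
            # GPUs without ECC report "[N/A]" for this counter.
            "ecc_uncorrected": 0 if ecc in ("[N/A]", "") else int(ecc),
        })
    return rows

def unhealthy(gpus, max_temp_c=85):
    """Flag GPUs that are too hot or show uncorrected ECC errors.
    The 85 C cutoff is an illustrative assumption, not a vendor spec."""
    return [g for g in gpus
            if g["temp_c"] > max_temp_c or g["ecc_uncorrected"] > 0]

if __name__ == "__main__":
    # Sample output is hard-coded so the sketch runs without a GPU;
    # on a real node you would capture `nvidia-smi`'s stdout instead.
    sample = (
        "0, NVIDIA H100 80GB HBM3, 62, 97, 0\n"
        "1, NVIDIA H100 80GB HBM3, 91, 99, 0\n"
        "2, NVIDIA H100 80GB HBM3, 55, 12, 3\n"
    )
    bad = unhealthy(parse_gpu_report(sample))
    print([g["index"] for g in bad])  # GPU 1 is hot, GPU 2 has ECC errors
```

In practice a check like this would feed an alerting pipeline (or DCGM's built-in health checks would replace it entirely); the sketch only shows the parsing-and-thresholding shape of the task.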

Nice To Haves

  • Experience with 1000+ GPU deployments
  • NVIDIA DGX, HGX, or SuperPOD certification
  • Distributed training frameworks (PyTorch FSDP, DeepSpeed, Megatron-LM)
  • ML framework optimization and profiling
  • Experience with AMD MI300 or Intel Gaudi accelerators
  • Contributions to open-source HPC/AI infrastructure projects

Responsibilities

  • Partner with clients to understand workload requirements and design optimal GPU cluster architectures
  • Create technical proposals and capacity planning for clusters ranging from 100 to 10,000+ GPUs
  • Develop deployment strategies for LLM training, inference, and HPC workloads
  • Present architectural recommendations to technical and executive stakeholders
  • Deploy and configure orchestration systems including SLURM and Kubernetes for distributed workloads
  • Implement high-performance networking with InfiniBand, RoCE, and NVLink interconnects
  • Optimize GPU utilization, memory management, and inter-node communication
  • Configure parallel filesystems (Lustre, BeeGFS, GPFS) for optimal I/O performance
  • Tune system performance from kernel parameters to CUDA configurations
  • Serve as primary technical escalation point for customer infrastructure issues
  • Diagnose and resolve complex problems across the full stack: hardware, drivers, networking, and software
  • Implement monitoring, alerting, and automated remediation systems
  • Provide 24/7 on-call support for critical customer deployments
  • Create runbooks and documentation for customer operations teams
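Several of the responsibilities above (monitoring, alerting, automated remediation, SLURM administration) often converge on one pattern: drain a suspect node from the scheduler before a customer job lands on it. Below is a minimal sketch of that pattern, assuming SLURM's standard `scontrol update NodeName=... State=DRAIN` command; the node names, the health-check policy, and the `dry_run` wrapper are hypothetical illustrations, not anything specified in this posting.

```python
import shlex
import subprocess

def drain_command(node, reason):
    """Build the stock SLURM command to drain a node:
    scontrol update NodeName=<node> State=DRAIN Reason=<reason>"""
    return ["scontrol", "update",
            f"NodeName={node}", "State=DRAIN",
            f"Reason={shlex.quote(reason)}"]

def remediate(unhealthy_nodes, dry_run=True):
    """Drain every unhealthy node. With dry_run=True the commands are
    printed instead of executed, so the sketch is safe off-cluster."""
    cmds = [drain_command(n, "gpu-health-check failed")
            for n in unhealthy_nodes]
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # requires SLURM on PATH
    return cmds

if __name__ == "__main__":
    # Hypothetical node names; in production the list would come from
    # DCGM health checks or monitoring alerts, not be hard-coded.
    remediate(["gpu-node-017", "gpu-node-042"])
```

A real remediation loop would also record the action for the on-call engineer and resume the node (`State=RESUME`) once the underlying fault is cleared, which is exactly the kind of procedure the runbooks mentioned above would document.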