About The Position

Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure are limited, as well as areas where data must be processed locally for real-time analytics and AI at the edge. We're looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere.

Armada is seeking a visionary VP of Customer Engineering to lead a world-class, globally distributed team of Customer Engineers at the forefront of AI infrastructure and edge computing. This is a pivotal leadership role for a builder and operator who thrives at the intersection of cutting-edge AI technology and large-scale industrial deployment. As Armada accelerates adoption of its AI-powered edge platform, spanning ruggedized modular data centers, GPU-accelerated inference, and real-time edge AI, this leader will shape how we engage with customers globally: from initial technical discovery through validated, deployment-ready architectures.

You will own the pre-sales technical lifecycle across all regions, ensuring our Customer Engineers operate with rigor, speed, and clarity in North America, EMEA, APAC, and emerging markets. The CE function guides customers from mission-critical AI ambitions and complex operational environments to scalable, field-proven Armada solutions, taking each opportunity 80% of the way: scoping requirements, framing AI infrastructure trade-offs, validating feasibility, and ensuring full qualification before handing off to detailed engineering.

Requirements

  • Bachelor's degree in Computer Science, Electrical Engineering, Systems Engineering, or equivalent technical field.
  • 7+ years leading Customer Engineering or Solutions Architecture teams in pre-sales; demonstrated success hiring and scaling globally.
  • 7–10+ years of hands-on pre-sales or solutions engineering experience in AI infrastructure, edge computing, datacenter, or distributed systems.
  • Deep expertise in GPU and AI accelerator infrastructure: NVIDIA GPU architectures, AI inference frameworks (TensorRT, ONNX, vLLM), and edge AI platforms.
  • Strong grounding in datacenter and edge infrastructure: compute (GPU, bare metal, virtualization), storage (SAN/NAS/Object/NVMe), networking (LAN/WAN/SD-WAN/SATCOM), and facility systems (power, cooling).
  • Hands-on experience with container orchestration (Kubernetes), virtualization (VMware, KVM, Hyper-V), and cloud service models (IaaS, PaaS, hybrid).
  • Proven ability to engage and influence C-level technical and operational leaders across global enterprise and government customers.
  • Willingness to travel internationally, including to remote and operationally austere field sites.

Nice To Haves

  • Experience deploying or architecting AI solutions in oil & gas, defense & intelligence, utilities, telecommunications, or mining verticals.
  • Hands-on exposure to modular, containerized, or mobile data center deployments — including skid-based and rapid-deploy form factors.
  • Familiarity with edge AI inference optimization, model quantization, and deployment frameworks for bandwidth-constrained environments.
  • Background integrating OT/IT convergence — connecting sensors, IIoT devices, and SCADA systems to AI-enabled edge platforms.
  • Experience with satellite and hybrid connectivity architectures (Starlink, LEO, VSAT) for remote AI deployments.
  • International experience building CE teams or managing customer engagements in EMEA, APAC, or Middle East markets.
  • Certifications in AI/ML (e.g., NVIDIA DLI), cloud infrastructure (AWS, Azure, GCP), or datacenter design (CDCP, DCDC, RCDD).
  • Experience collaborating with construction, facilities, and deployment partners on large-scale infrastructure projects.

Responsibilities

Build & Scale a Global Customer Engineering Organization
  • Lead, coach, and develop a globally distributed team of Customer Engineers spanning North America, EMEA, and emerging markets.
  • Define and execute a global hiring strategy: build CE presence in new regions, establish operating rhythms, onboard early hires, and set standards for technical excellence worldwide.
  • Create talent development pathways that grow CEs into senior AI infrastructure architects and future leaders.
  • Build a culture of continuous learning around AI infrastructure, edge computing, and real-world deployment at scale.

Drive AI-Focused Technical Discovery & Solution Architecture
  • Champion a rigorous, AI-first discovery methodology, guiding CEs to uncover customer mission goals, AI workload requirements, data sovereignty constraints, and connectivity realities across diverse global environments.
  • Ensure the team consistently translates complex, distributed AI environments into validated edge architectures built around Armada's Galleon modular data centers, Atlas platform, and GPU-accelerated edge AI stack.
  • Define and govern solution design standards for AI inference, real-time analytics, and edge ML pipelines in bandwidth-constrained and disconnected environments.

Elevate Global Pre-Sales Technical Quality
  • Set and raise the bar on discovery outputs, AI architecture designs, technical narratives, demo environments, and proof-of-value success criteria worldwide.
  • Standardize technical qualification frameworks, ensuring AI infrastructure opportunities are well-scoped, feasible, and commercially validated before deep engineering engagement.
  • Develop a global review cadence and peer architecture process to maintain consistency and quality across all regions.

Partner Cross-Functionally to Accelerate Global Revenue
  • Collaborate tightly with regional Sales leaders, Product, Engineering, and Global Deployment teams to align on AI infrastructure positioning, competitive differentiation, and customer roadmaps.
  • Bridge technical architectures to measurable customer outcomes, articulating ROI, operational efficiency, and AI-driven value creation across energy, defense, telecommunications, and industrial verticals.
  • Synthesize global customer insights to inform Armada's AI product roadmap, hardware evolution, and platform strategy.

Build Scalable AI Infrastructure Methodologies & Playbooks
  • Develop globally consistent reference architectures for AI inference at the edge, GPU cluster deployments, satellite-connected operations, and hybrid cloud-edge patterns.
  • Create repeatable frameworks for AI proof-of-value pilots, technical discovery, and competitive positioning across Armada's key verticals.
  • Enable regional CE teams with localized deployment guides, regulatory considerations, and partner ecosystem alignment — ensuring global consistency while preserving local agility.

Benefits

  • Medical, dental, and vision (subsidized cost)
  • Health savings accounts (HSA), flexible spending accounts (FSA), and dependent care FSAs (DCFSA)
  • Retirement plan options, including 401(k) and Roth 401(k)
  • Unlimited paid time off (PTO)
  • 15 paid company holidays per year