Principal Architect – HPC & AI (NVIDIA Ecosystem)

World Wide Technology Healthcare Solutions | Jenks, OK
$215,000 - $245,000 | Remote

About The Position

The Principal Architect leads HPC/AI-focused Professional Services delivery engagements and cross-functional technical teams on customer programs or projects. They are responsible for technical communications with WWT Engineers, Architects, and the customer on AI-driven projects, and may participate in several customer projects concurrently, integrating AI solutions with enterprise IT systems.

Role Summary

The Principal Architect will be at the epicenter of the AI revolution, working with the most advanced hardware on the planet. Whether you're helping a research facility unlock new scientific breakthroughs or an enterprise build its first private AI cloud, your fingerprints will be on the infrastructure that defines the next decade of technology.

The right person for the job is a senior individual contributor responsible for designing, implementing, and optimizing large-scale High-Performance Computing (HPC) and AI platforms centered on the NVIDIA data center ecosystem. The role operates in a hybrid capacity, combining hands-on technical architecture with selective customer-facing advisory responsibilities. The architect serves as a technical authority across GPU-accelerated compute, high-performance networking, and modern parallel storage platforms, influencing architectural standards and delivery outcomes while ensuring successful, on-time, and on-budget customer deployments without escalations.

This is a remote, work-from-home position with an average travel expectation of approximately 10% and a willingness to travel more during peak project phases or critical customer engagements.

Requirements

  • Expert-level, deep architectural knowledge of NVIDIA data center platforms, including HGX and DGX.
  • GPU-accelerated compute architecture for AI and HPC workloads.
  • High-performance networking architectures, especially with Spectrum-X.
  • Large-scale AI factory and HPC platform design.
  • Hands-on architectural experience with high-performance parallel or scale-out storage systems.
  • Deep understanding of storage performance characteristics relevant to AI and HPC workloads, including bandwidth, IOPS, latency, and metadata scaling.
  • Proven experience integrating storage platforms such as VAST Data, NetApp, WEKA, DDN, or Lustre into GPU-accelerated environments.
  • NVIDIA Base Command Manager (BCM) for cluster lifecycle management and operations.
  • Slurm for HPC workload scheduling and resource management.
  • Run:AI for GPU orchestration and multi-tenant AI workload optimization.
  • Kubernetes administration including deploying and managing GPU-accelerated AI and HPC workloads.
  • Linux systems administration in large-scale, performance-sensitive environments.
  • Containerized AI workflows and their interaction with schedulers and storage systems.
  • Bachelor’s degree in a technical field, or equivalent hands-on experience architecting large-scale HPC or AI systems.
  • Experience: 10+ years in HPC, Data Center Architecture, and/or Systems Engineering.
  • Bare Metal Focus: A fundamental preference for, and understanding of, on-premises hardware constraints (power, cooling, cabling).
  • Proven experience as a Senior or Lead Architect, or equivalent experience on AI projects.

Nice To Haves

  • Advanced degree (MS/PhD) in a relevant field is a plus but not required.
  • Experience optimizing existing HPC or AI platforms for performance, utilization, and cost efficiency.
  • Prior experience with multi-site, air-gapped, or regulated environments.
  • Experience with liquid cooling, power/cooling design, and data center integration strongly preferred.

Responsibilities

  • Lead the end-to-end architecture of GPU-accelerated HPC and AI platforms, including greenfield AI factory designs and optimization of existing HPC environments.
  • Architect integrated solutions spanning Compute, Networking, and Storage using NVIDIA HGX and DGX platforms, Grace CPU architectures, Spectrum-X networking, and high-performance parallel storage systems.
  • Design storage architectures optimized for AI training, inference, and HPC workloads, balancing performance, scalability, resiliency, and cost.
  • Define reference architectures, design patterns, and best practices for repeatable and supportable customer deployments.
  • Provide hands-on technical leadership during implementation phases, including cluster bring-up, performance tuning, and workload optimization.
  • Architect and integrate workload orchestration and scheduling platforms using NVIDIA Base Command Manager, Slurm, Kubernetes, and Run:AI.
  • Optimize end-to-end data pipelines, including GPU utilization, storage throughput, metadata performance, and job scheduling efficiency.
  • Troubleshoot performance bottlenecks across Compute, Networking, and Storage.
  • Design and validate high-performance storage solutions using modern parallel and scale-out storage platforms.
  • Architect storage solutions that support demanding AI and HPC workloads, including high-throughput training pipelines, checkpointing, and large-scale shared datasets.
  • Collaborate on compute and networking design to ensure balanced, bottleneck-free architectures.
  • Act as a senior technical authority for HPC and AI architecture across internal teams and customer engagements.
  • Participate selectively in customer-facing discussions to validate architecture and delivery plans, with a primary focus on design integrity and execution rather than pre-sales.
  • Influence platform standards, architectural direction, and technical decision-making through expertise and demonstrated execution.
  • Identify technical risks early across Compute, Networking, Storage, and orchestration layers, and drive mitigation strategies.
  • Partner with the PMO counterpart to resolve risks and issues as they are identified and to ensure production-ready, supportable platforms.
  • Ensure staff, contractors, and partners adhere to WWT best practices and templates for AI solution delivery.
  • Review deployment documents, technical assessments, and other outputs to ensure consistency and accuracy, aligning with AI and "One Voice" standards.

Benefits

  • Health and Wellbeing: Health, Dental, and Vision Care, Onsite Health Centers, Employee Assistance Program, Wellness program
  • Financial Benefits: Competitive pay, Profit Sharing, 401k Plan with Company Matching, Life and Disability Insurance, Tuition Reimbursement
  • Paid Time Off: PTO and Sick Leave (starting at 20 days per year) & Holidays (10 per year), Parental Leave, Military Leave, Bereavement
  • Additional Perks: Nursing Mothers Benefits, Voluntary Legal, Pet Insurance, Employee Discount Program