About The Position

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where We Work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

Customer experience at Nebius AI Cloud involves tackling customers’ challenges and directly impacting their success by solving real-world AI and ML problems at massive GPU cloud scale. You’ll not only resolve issues, but also play a key role in shaping clients’ business success by optimizing their AI solutions. Working with advanced GPUs such as the H200, B200, and GB200, as well as modern ML frameworks, you’ll influence the development of the Nebius AI Cloud and gain experience at the intersection of infrastructure and AI. With minimal bureaucracy, you’ll have the freedom to innovate, take ownership, and drive change. Opportunities for growth are abundant in this vibrant and supportive professional community.

We are seeking a Specialist HPC Infrastructure Solutions Architect to design, build, and optimize next-generation high-performance computing (HPC) platforms for AI, simulation, and large-scale data processing workloads. The ideal candidate combines deep knowledge of cloud-native architecture, Kubernetes orchestration, networking, and HPC system design with hands-on experience implementing NVIDIA GPU-based compute environments and MLOps toolchains. This role sits at the intersection of infrastructure engineering, accelerated computing, and AI systems design, shaping the foundation for high-throughput, low-latency distributed workloads in cloud environments.

You’re welcome to work remotely from the United States or Canada.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (Ph.D. a plus).
  • 3+ years of hands-on experience architecting HPC or large-scale GPU clusters.
  • Expertise in Linux systems, Kubernetes, container runtimes (containerd, CRI-O, Docker), and related CI/CD practices.
  • Strong understanding of HPC networking protocols and RDMA stacks (InfiniBand, NVLink/NVSwitch).
  • Deep understanding of storage and I/O optimization for large datasets (Ceph, Lustre, NFS, GPUDirect Storage).
  • Familiarity with Terraform, Ansible, Helm, and GitOps workflows.
  • Strong scripting skills in Python or Bash for automation and tool integration (see the sketch after this list).
  • Excellent communication and documentation skills; ability to lead design reviews and customer engagements.
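
To give a flavor of the day-to-day automation involved, here is a minimal Python sketch that inventories GPU capacity across a Kubernetes cluster with the official kubernetes client. It assumes the NVIDIA GPU Operator (or the device plugin plus GPU Feature Discovery) is installed, so nodes advertise the nvidia.com/gpu resource and the nvidia.com/gpu.product label; it is an illustration, not part of the role’s toolchain.

    from kubernetes import client, config

    def gpu_inventory() -> None:
        # Use the local kubeconfig; inside a pod, config.load_incluster_config() applies.
        config.load_kube_config()
        v1 = client.CoreV1Api()
        for node in v1.list_node().items:
            # Nodes expose GPU capacity as the extended resource "nvidia.com/gpu".
            gpus = (node.status.allocatable or {}).get("nvidia.com/gpu", "0")
            if gpus != "0":
                # GPU Feature Discovery labels nodes with the detected product name.
                labels = node.metadata.labels or {}
                product = labels.get("nvidia.com/gpu.product", "unknown")
                print(f"{node.metadata.name}: {gpus} x {product}")

    if __name__ == "__main__":
        gpu_inventory()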

Nice To Haves

  • Proficient with the NVIDIA GPU ecosystem: GPU Operator, MIG, DCGM, NCCL, Nsight, and CUDA stack management.
  • Experience designing or managing AI/ML pipelines via MLflow, Kubeflow, NeMo, or similar frameworks.
  • Experience with cloud-native HPC offerings (Slurm, LSF, PBS, etc.).
  • Background in designing multi-tenant GPU infrastructures or AI training farms.
  • Exposure to distributed ML frameworks (PyTorch DDP, DeepSpeed, Megatron); a minimal DDP sketch follows this list.
  • Knowledge of observability for HPC (Prometheus, DCGM Exporter, Grafana, NVIDIA NGC monitoring tools).
  • Contributions to open-source HPC/CUDA/Kubernetes projects are a strong plus.
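
For orientation, here is a minimal single-node PyTorch DistributedDataParallel sketch of the kind of workload these frameworks target. The model, data, and hyperparameters are placeholders, and it assumes a torchrun launch on a CUDA-equipped host.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main() -> None:
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
        model = DDP(model, device_ids=[local_rank])
        opt = torch.optim.SGD(model.parameters(), lr=1e-3)

        for _ in range(10):  # placeholder training loop with synthetic data
            x = torch.randn(32, 1024, device=local_rank)
            loss = model(x).square().mean()
            opt.zero_grad()
            loss.backward()  # gradients are all-reduced across ranks via NCCL
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nproc_per_node=8 train.py, each process drives one GPU and DDP overlaps gradient all-reduce with the backward pass.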

Responsibilities

  • Architect and implement scalable HPC clusters optimized for AI, simulation, and distributed training, leveraging container orchestration frameworks and schedulers (e.g., Kubernetes, Slurm).
  • Design and integrate GPU-accelerated compute infrastructures featuring NVIDIA Hopper and Blackwell architectures, NVLink/NVSwitch, and InfiniBand/RoCE interconnects.
  • Deploy and manage GPU Operator and Network Operator stacks for automated lifecycle management of GPU and high-speed networking components.
  • Design and validate cloud HPC environments, focusing on low-latency, high-bandwidth networking, multi-GPU scaling, and efficient workload scheduling.
  • Lead reference architectures for AI/ML model training, data pipelines, and MLOps integrations using modern observability and CI/CD tooling.
  • Collaborate with hardware vendors (e.g., NVIDIA) and cloud providers to evaluate and optimize emerging HPC and GPU technologies.
  • Benchmark system performance, identify bottlenecks, and tune resource utilization across compute, network, and storage tiers (see the bandwidth sketch after this list).
  • Provide expert-level technical guidance to customers, internal teams, and partners on HPC architecture patterns, and lead operational excellence reviews and customer engagements.
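
By way of illustration, here is a rough all-reduce bandwidth micro-benchmark in PyTorch, in the spirit of nccl-tests. The buffer sizes and iteration counts are arbitrary placeholders, and it assumes a torchrun launch across CUDA hosts with NCCL available.

    import os
    import time
    import torch
    import torch.distributed as dist

    def allreduce_bus_bandwidth(num_elems: int, iters: int = 20) -> float:
        world = dist.get_world_size()
        x = torch.randn(num_elems, device="cuda")
        for _ in range(5):  # warm-up excludes NCCL communicator setup cost
            dist.all_reduce(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            dist.all_reduce(x)
        torch.cuda.synchronize()
        elapsed = (time.perf_counter() - start) / iters
        # nccl-tests "bus bandwidth" convention: a ring all-reduce moves
        # 2 * (n - 1) / n of the buffer per rank.
        bytes_moved = x.numel() * x.element_size() * 2 * (world - 1) / world
        return bytes_moved / elapsed / 1e9  # GB/s

    if __name__ == "__main__":
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
        for n in (1 << 20, 1 << 24, 1 << 28):  # 4 MB to 1 GB of float32
            bw = allreduce_bus_bandwidth(n)
            if dist.get_rank() == 0:
                print(f"{n * 4 / 1e6:.0f} MB: {bw:.1f} GB/s bus bandwidth")
        dist.destroy_process_group()

Sweeping message sizes like this helps separate latency-bound small-message behavior from the bandwidth-bound regime when comparing NVLink, InfiniBand, or RoCE paths.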

Benefits

  • Health Insurance: 100% company-paid medical, dental, and vision coverage for employees and families.
  • 401(k) Plan: Up to 4% company match with immediate vesting.
  • Parental Leave: 20 weeks paid for primary caregivers, 12 weeks for secondary caregivers.
  • Remote Work Reimbursement: Up to $85/month for mobile and internet.
  • Disability & Life Insurance: Company-paid short-term, long-term, and life insurance coverage.