About The Position

Kraken is building a dedicated AI Compute and Infrastructure team to power the next generation of model training, inference, evaluation, and experimentation across the exchange. This team sits within engineering leadership and owns the infrastructure layer that lets Kraken run AI workloads with control, speed, reliability, and cost discipline.

The team is responsible for GPU and accelerator infrastructure, cluster operations, scheduling, model serving, observability, capacity planning, and cost-efficient compute at scale. This is the backbone that allows Kraken to train, serve, evaluate, and iterate on AI systems in-house where it matters for privacy, latency, reliability, cost, or product differentiation.

You will join a small, senior, high-impact team working directly with AI/ML researchers, platform engineers, security teams, and product teams. The mandate is simple: make Kraken's AI ambitions real by building compute infrastructure that is fast, dependable, efficient, and production-grade.

Requirements

  • 5+ years of infrastructure engineering experience, with significant time spent on GPU compute, ML infrastructure, distributed systems, high-performance computing, or large-scale production platforms.
  • Hands-on experience operating GPU clusters or accelerator-backed infrastructure in production or production-like environments, including scheduling, orchestration, utilization monitoring, and cost optimization.
  • Strong systems engineering fundamentals across Linux, networking, storage, containers, Kubernetes, distributed runtimes, and production debugging.
  • Experience with ML serving frameworks such as vLLM, Triton Inference Server, TensorRT, TorchServe, KServe, Ray Serve, or equivalent systems.
  • Proficiency in Python for infrastructure automation, tooling, debugging, integration, and operational workflows.
  • Practical understanding of performance tradeoffs across batching, concurrency, memory usage, GPU utilization, model size, latency, throughput, availability, and cost.
  • Track record of optimizing compute costs while meeting clear performance, reliability, and availability expectations.
  • Experience building observable systems with useful metrics, logs, traces, dashboards, alerts, and incident workflows.
  • Comfortable working in high-stakes, always-on environments where uptime, throughput, correctness, and operational discipline are critical.
  • Clear communicator who can translate infrastructure tradeoffs for researchers, product teams, platform engineers, security stakeholders, and engineering leadership.

Nice To Haves

  • Experience at a frontier AI lab, hyperscaler, high-frequency trading firm, research platform, or high-scale ML organization.
  • Familiarity with custom silicon or specialized accelerators such as TPUs, AWS Trainium, Gaudi, or similar platforms.
  • Background in capacity planning, procurement input, reserved capacity strategy, cloud accelerator economics, or GPU fleet cost management.
  • Experience with distributed training frameworks such as DeepSpeed, Megatron-LM, FSDP, Ray, or equivalent systems.
  • Experience debugging CUDA, NCCL, kernel, driver, runtime, memory, networking, or low-level performance issues.
  • Experience with Rust, C++, Go, CUDA, or other systems languages used for performance-critical infrastructure.
  • Crypto, financial services, trading infrastructure, or security-sensitive production infrastructure experience.

Responsibilities

  • Own and operate GPU and accelerator clusters used for training, inference, evaluation, and experimentation, including drivers, runtimes, kernels, device plugins, node configuration, scheduling primitives, and workload isolation.
  • Design infrastructure that enables Kraken teams to run models on in-house GPUs where it is strategically and economically preferable, reducing dependence on external providers and containing compute costs.
  • Build and improve scheduling, orchestration, placement, quota management, and utilization systems across heterogeneous accelerator environments.
  • Optimize inference pipelines for latency, throughput, reliability, memory efficiency, and cost using frameworks such as vLLM, Triton Inference Server, TensorRT, or equivalent serving stacks.
  • Partner with ML engineers and researchers to remove bottlenecks in training, evaluation, batch inference, online inference, deployment, and production debugging workflows.
  • Build observability for GPU utilization, memory pressure, queue depth, saturation, token throughput, request latency, failed workloads, capacity pressure, and spend.
  • Drive reliability, incident response, alerting, runbooks, and post-incident improvements for always-on AI compute infrastructure.
  • Evaluate and integrate new hardware, cloud instance families, specialized accelerators, runtimes, schedulers, and serving frameworks as the AI infrastructure landscape evolves.
  • Build tooling that makes GPU usage visible, accountable, and easier for internal teams to consume without needing to become infrastructure experts.
  • Contribute to long-term architecture decisions that balance performance, cost efficiency, scalability, operational simplicity, and production safety.

Benefits

  • Fully remote company
  • Krakenites in 70+ countries speak over 50 languages