Infrastructure Engineer (Storage)

Lightning AI
Seattle, WA
Hybrid

About The Position

Lightning AI is seeking a Senior Storage Infrastructure Engineer to join their Infrastructure Engineering team. In this role, you will focus on building and operating the storage systems that power large-scale AI/ML training, inference, and HPC workloads. You will work at the intersection of software, hardware, and operations: developing automation, improving reliability, and scaling distributed storage systems across their bare-metal infrastructure.

You will help own the data plane of their storage infrastructure, supporting high-throughput, low-latency data access for some of the most demanding AI workloads. You'll play a key role in managing and evolving their storage stack (including VAST and S3-compatible systems like Ceph), ensuring performance, reliability, and efficiency at scale.

This role can be based hybrid out of one of their US hubs (Seattle, NYC, or SF) or fully remote within the U.S., with occasional company and team offsites. Visa sponsorship is not available for this position.

Requirements

  • 5+ years of experience in infrastructure engineering, systems engineering, or related roles
  • Hands-on experience operating distributed storage systems (e.g., VAST, Ceph, or similar)
  • Strong Linux systems experience in production environments
  • Proficiency in Python or similar scripting/programming languages for automation
  • Experience working with bare-metal infrastructure and hardware-oriented systems
  • Ability to debug complex issues across system boundaries (storage, OS, hardware, networking)
  • Experience with storage networking protocols (e.g., NFS or similar)
  • Experience with capacity planning, monitoring, and performance tuning

Nice To Haves

  • Experience with VAST storage systems in production environments
  • Experience operating S3-compatible object storage at scale
  • Data center operations experience, including working with physical hardware
  • Familiarity with AI/ML or HPC workloads and their storage requirements
  • Background in high-performance or low-latency distributed systems
  • Familiarity with high-performance data transfer technologies (e.g., RDMA, GPU Direct Storage)
  • Experience supporting GPU-based workloads or large-scale compute clusters

Responsibilities

  • Operate and scale distributed storage systems, including VAST and S3-compatible object storage (e.g., Ceph)
  • Improve performance, reliability, and efficiency of storage systems supporting large-scale AI/ML workloads
  • Troubleshoot complex storage and data path issues across hardware and software layers
  • Optimize storage performance to support high-throughput, low-latency AI training and inference workloads
  • Build and maintain automation for provisioning, managing, and monitoring storage infrastructure
  • Develop Python-based tools and workflows to reduce manual operational overhead
  • Improve lifecycle management of storage clusters, from deployment through maintenance and scaling
  • Manage and operate Linux-based systems in production, including bare-metal environments
  • Partner with infrastructure and data center teams on hardware bring-up, upgrades, and issue resolution
  • Support capacity planning, utilization tracking, and forecasting for storage systems
  • Leverage monitoring and telemetry to diagnose issues and improve system performance and reliability
  • Work closely with Infrastructure Engineering, Network Engineering, and Platform teams to integrate storage into the broader platform
  • Contribute to design discussions around new infrastructure deployments and scaling strategies
  • Help define best practices for operating storage systems in high-performance computing environments

Benefits

  • Comprehensive medical, dental and vision coverage (U.S.); Private medical and dental insurance (U.K.)
  • Retirement and financial wellness support (U.S.); Pension contribution (U.K.)
  • Generous paid time off, plus holidays
  • Paid parental leave
  • Professional development support
  • Wellness and work-from-home stipends
  • Flexible work environment
  • Discretionary bonus
  • Meaningful equity component