Senior HPC Operations Engineer

Lambda
California, PA
Hybrid

About The Position

Lambda, The Superintelligence Cloud, builds Gigawatt-scale AI Factories for Training and Inference. Lambda’s mission is to make compute as ubiquitous as electricity and give every person access to artificial intelligence. One person, one GPU. If you'd like to build the world's best deep learning cloud, join us.

Note: This position requires presence in our San Francisco/San Jose or Seattle office location 4 days per week; Lambda’s designated work-from-home day is currently Tuesday.

Engineering at Lambda is responsible for building and scaling our cloud offering. Our scope includes the Lambda website, cloud APIs and systems, as well as internal tooling for system deployment, management, and maintenance.

Requirements

  • Are a deeply experienced HPC engineer comfortable with logical provisioning of a cluster
  • Have a strong understanding of HPC/AI architecture, operating systems, firmware, software, and networking
  • 10+ years of experience in deploying and configuring HPC clusters for AI workloads
  • Have an innate attention to detail
  • Have experience with Bright Cluster Manager or similar cluster management tools
  • Are an expert in configuring and troubleshooting:
      • SFP+ fiber, InfiniBand (IB), and 100 GbE network fabrics
      • Ethernet, switching, and power infrastructure
      • GPUDirect, RDMA, NCCL, and Horovod environments
      • Linux-based compute nodes, firmware updates, and driver installation
      • SLURM, Kubernetes, or other job scheduling systems
  • Work well under deadlines and structured project plans, knowing when and how to ask for changes to project timelines
  • Have excellent problem solving and troubleshooting skills
  • Have flexibility to travel to our North American data centers as on-site needs arise or as part of training exercises
  • Are able to work independently and as part of a team
  • Are comfortable mentoring and supporting junior HPC engineers on cluster deployments

Nice To Haves

  • Experience with machine learning and deep learning frameworks (PyTorch, TensorFlow) and benchmarking tools (DeepSpeed, MLPerf)
  • Experience with containerization technologies (Docker, Kubernetes)
  • Experience working with the technologies that underpin our cloud business (GPU acceleration, virtualization, and cloud computing)
  • Keen situational awareness in customer situations, employing diplomacy and tact
  • Bachelor's degree in EE, CS, Physics, Mathematics, or equivalent work experience

Responsibilities

  • Remotely deploy and configure large-scale HPC clusters for AI workloads (up to many thousands of nodes)
  • Remotely install and configure operating systems, firmware, software, and networking on HPC clusters both manually and using automation tools
  • Troubleshoot and resolve HPC cluster issues working closely with physical deployment teams on-site
  • Provide clear and detailed requirements back to other engineering teams on gaps and improvement areas, specifically in the areas of simplification, stability, and operational efficiency
  • Contribute to the creation and maintenance of Standard Operating Procedures
  • Provide regular and well-communicated updates to project leads throughout each deployment
  • Mentor and assist less experienced team members
  • Stay up-to-date on the latest HPC/AI technologies and best practices

Benefits

  • We offer generous cash & equity compensation
  • Health, dental, and vision coverage for you and your dependents
  • Wellness and Commuter stipends for select roles
  • 401k Plan with 2% company match (USA employees)
  • Flexible Paid Time Off Plan that we all actually use