Applied Research Engineer – Training Infra

Snorkel AI · San Francisco, CA
$150,000 - $180,000 · Remote

About The Position

At Snorkel, we believe meaningful AI doesn’t start with the model; it starts with the data. We’re on a mission to help enterprises transform expert knowledge into specialized AI at scale. The AI landscape has gone through incredible changes from 2015, when Snorkel started as a research project in the Stanford AI Lab, to the generative AI breakthroughs of today. But one thing has remained constant: the data you use to build AI is the key to achieving differentiation, high performance, and production-ready systems. We work with some of the world’s largest organizations to empower scientists, engineers, financial experts, product creators, journalists, and more to build custom AI with their data faster than ever before. Excited to help us redefine how AI is built? Apply to be the newest Snorkeler!

The Role

As an Applied Research Engineer at Snorkel AI, you will own the infrastructure that powers our model training and evaluation work. This is a hands-on role where you will build and operate GPU cluster infrastructure, training pipelines, and the tooling that allows our research and engineering teams to run experiments reliably and at scale. You will work closely with research scientists and engineers, translating training requirements into robust, reproducible systems, and proactively removing infrastructure blockers before they slow down the work that matters most.

Snorkel AI operates in a fast-paced, high-impact environment. We are looking for someone who takes pride in operational excellence, loves solving complex distributed systems problems, and thrives when given real ownership.

Location: Redwood City or San Francisco, CA, or remote

Requirements

  • Hands-on experience managing GPU clusters on major cloud providers, including provisioning, network configuration, and cost management.
  • Experience with distributed compute orchestration tools such as Kubernetes, Slurm, or equivalent cluster management systems.
  • Working knowledge of distributed training concepts: parallelism strategies, memory optimization techniques, and inter-node communication (see the first sketch after this list).
  • Experience setting up, managing, and integrating ML experiment tracking and data/model versioning tools (see the second sketch after this list).
  • Strong Python proficiency and solid software engineering fundamentals such as version control, modular design, and automation.
  • Ability to work in a fast-moving, iterative environment and take end-to-end ownership of ambiguous infrastructure problems.
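
For a concrete sense of what “distributed training concepts” means here, below is a minimal sketch of data-parallel training with PyTorch DistributedDataParallel over NCCL. The model, dataset, and hyperparameters are illustrative placeholders, not anything specific to Snorkel’s stack.

```python
# Minimal data-parallel training sketch with PyTorch DDP.
# Launch with: torchrun --nnodes=2 --nproc-per-node=8 train.py
# Model, dataset, and hyperparameters are illustrative placeholders.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # torchrun sets RANK/LOCAL_RANK/WORLD_SIZE; NCCL carries the
    # inter-node gradient all-reduce.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 512).cuda()  # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])

    data = TensorDataset(torch.randn(4096, 512), torch.randn(4096, 512))
    sampler = DistributedSampler(data)  # shards the dataset across ranks
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            loss = torch.nn.functional.mse_loss(model(x), y)
            opt.zero_grad()
            loss.backward()  # DDP all-reduces gradients across ranks here
            opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```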
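
Likewise, a sketch of basic experiment tracking, using MLflow as one common choice (an assumption, not a statement about Snorkel’s tooling); the tracking URI, experiment name, and logged values are hypothetical.

```python
# Minimal experiment-tracking sketch with MLflow (one common choice).
# The tracking URI, experiment name, and logged values are hypothetical.
from pathlib import Path

import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # assumed server
mlflow.set_experiment("sft-ablations")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_params({"lr": 1e-4, "batch_size": 32, "model": "gpt2"})
    for step in range(100):
        loss = 1.0 / (step + 1)  # stand-in for a real training loss
        mlflow.log_metric("train_loss", loss, step=step)
    # Version the exact config used, so the run is reproducible later.
    Path("config.yaml").write_text("lr: 1.0e-4\nbatch_size: 32\n")
    mlflow.log_artifact("config.yaml")
```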

Nice To Haves

  • Hands-on experience with post-training workflows such as supervised fine-tuning (SFT) or reinforcement learning (RLHF, GRPO, or similar) is a strong plus, but not required (a rough sketch of an SFT loop follows).
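
For a rough sense of the SFT side, here is a toy supervised fine-tuning loop with Hugging Face transformers; the gpt2 model and two-example “dataset” are placeholders chosen only to keep the sketch self-contained.

```python
# Toy supervised fine-tuning (SFT) sketch with Hugging Face transformers.
# gpt2 and the two-example "dataset" are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

examples = [
    "### Instruction: Say hi.\n### Response: Hi!",
    "### Instruction: Add 2+2.\n### Response: 4",
]
batch = tok(examples, return_tensors="pt", padding=True)
# For causal-LM SFT, labels are the input ids, with pad positions
# masked out (-100 is ignored by the loss).
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100

opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):
    out = model(**batch, labels=labels)
    out.loss.backward()
    opt.step()
    opt.zero_grad()
    print(f"step {step}: loss {out.loss.item():.3f}")
```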

Responsibilities

  • Set up and manage GPU cluster infrastructure on major cloud providers (e.g., AWS HyperPod) for distributed model training, including networking, provisioning, and cost tracking.
  • Build and operate job orchestration and scheduling systems (e.g., Kubernetes, Slurm, or cloud-native equivalents) to reliably launch and manage training, rollout, and evaluation jobs across multi-node clusters (see the job-launch sketch after this list).
  • Integrate and maintain ML training frameworks and post-training pipelines, ensuring they run stably and reproducibly at scale.
  • Set up and maintain experiment tracking, dataset versioning, and model artifact management to support fast iteration.
  • Monitor and optimize cluster health, inter-node communication, and resource utilization; implement fault tolerance and auto-recovery so long-running jobs survive node failures (see the checkpoint sketch after this list).
  • Work closely with research scientists and ML engineers to understand requirements, unblock experiments, and evolve infrastructure as our training workloads change.
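
To make the orchestration bullet concrete, here is a sketch that submits a single-node GPU training job through the official Kubernetes Python client; the container image, namespace, and GPU count are hypothetical.

```python
# Sketch: launching a single-node training job via the Kubernetes
# Python client. Image, namespace, and GPU count are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # inside a cluster: config.load_incluster_config()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="train-run-001"),
    spec=client.V1JobSpec(
        backoff_limit=2,  # retry the pod a couple of times on failure
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="registry.internal/train:latest",  # assumed image
                        command=["torchrun", "--nproc-per-node=8", "train.py"],
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "8"}
                        ),
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="training", body=job)
```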
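
And the fault-tolerance bullet, in miniature: persist training state atomically to shared storage and resume from it on restart, so a rescheduled job picks up where the failed one left off. Paths and intervals are illustrative.

```python
# Checkpoint-and-resume sketch: the core pattern behind surviving node
# failures in long-running jobs. Path and interval are illustrative.
import os

import torch

CKPT = "/shared/ckpts/run-001/latest.pt"  # assumed shared filesystem

def save_checkpoint(model, opt, step):
    tmp = CKPT + ".tmp"
    torch.save({"model": model.state_dict(),
                "opt": opt.state_dict(),
                "step": step}, tmp)
    os.replace(tmp, CKPT)  # atomic rename: never a half-written file

def load_checkpoint(model, opt):
    if not os.path.exists(CKPT):
        return 0  # fresh start
    state = torch.load(CKPT, map_location="cpu")
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    return state["step"] + 1

model = torch.nn.Linear(512, 512)
opt = torch.optim.AdamW(model.parameters())
start = load_checkpoint(model, opt)  # on restart, resume automatically

for step in range(start, 10_000):
    # ... one training step ...
    if step % 500 == 0:
        save_checkpoint(model, opt, step)
```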