About The Position

We're looking for a Senior DevOps Engineer to join Hatch's high-impact engineering team. This is a senior-level role focused on building resilient, secure, and scalable infrastructure to support both our core platform and our AI-powered product lines. You'll partner with engineers, ML practitioners, and product leaders to ensure our systems can scale with the speed of our ambitions.

Hatch is a fast-moving team of builders solving real-world business problems with AI. We move quickly, take ownership, and care deeply about delivering outcomes. Our engineering culture prioritizes operational rigor, clean architecture, and velocity without compromising reliability. If you're energized by scale, speed, and owning the infrastructure that powers AI workflows end to end, this is the role for you.

Requirements

  • 3+ years of experience in DevOps, SRE, or platform engineering roles in high-growth environments.
  • 3+ years of experience with AWS infrastructure and services, including networking, IAM, ECS/EKS, and serverless computing.
  • Strong experience with infrastructure-as-code (Terraform, Ansible) and CI/CD tooling (GitHub Actions, ArgoCD, etc.).
  • Experience supporting machine learning teams or MLOps platforms (e.g. model training pipelines, feature stores, model registries, online inference).
  • Strong knowledge of container orchestration (Kubernetes preferred) and observability stacks (Prometheus, Grafana, Sentry, Datadog, New Relic, etc.).
  • Proven ability to participate in architectural conversations and contribute to large-scale infrastructure improvements.
  • A bias toward simplicity, security, and reliability — you know when to build fast and when to build right.
  • Familiarity with at least one programming language, such as Python, Go, Erlang, or Rust.
  • Exposure to agentic programming workflows.
  • RHCE, RHCSA, or equivalent certifications preferred.

Responsibilities

  • Evolve our cloud infrastructure (AWS & GCP) using infrastructure-as-code tools like Terraform or Ansible.
  • Implement systems that support the compute-heavy and storage-intensive needs of machine learning and data processing pipelines.
  • Manage scalable, secure, and cost-efficient environments across dev, staging, and production.
  • Participate in an on-call rotation.
  • Collaborate with ML engineers to productionize models and manage workflows across training, testing, and deployment stages.
  • Implement infrastructure to support versioning, orchestration, and monitoring of ML models in production, using tools such as Kubeflow, SageMaker, Vertex AI, or custom pipelines.
  • Optimize data pipelines and model-serving infrastructure for low latency and high throughput.
  • Drive the strategy for observability, logging, and alerting across distributed systems.
  • Lead incident response, root cause analysis, and system hardening for long-term resiliency.
  • Implement best practices for infrastructure security, container hardening, and network architecture.
  • Partner with engineering teams to bake DevOps best practices into the development lifecycle.
  • Build tooling and automation that improves developer velocity, release stability, and system transparency.

Benefits

  • Work at the intersection of infrastructure and machine learning at a company building real AI products with urgency and purpose.
  • Join a culture that expects technical leadership, fast decision-making, and relentless curiosity.
  • Partner with high-caliber engineers and product leaders in a tight-knit, fast-executing environment.