Sr. Infrastructure Engineer

Edison Scientific · San Francisco, CA
$200,000 - $350,000 · Onsite

About The Position

Edison Scientific focuses on building and commercializing AI agents for science, and shares FutureHouse’s mission to build an AI Scientist: scaling autonomous research, productizing it, and applying it to critical challenges such as drug development.

As a Senior Infrastructure Engineer, you'll play a key role in designing, scaling, and operating the core platform infrastructure that powers autonomous scientific discovery. Your primary focus will be orchestration for our agents at scale: building and managing clusters that run thousands of persistent, stateful workloads, developing custom resource definitions (CRDs) and operators, and ensuring the reliability and efficiency of our compute layer. Our mission is to build an AI Scientist, and you'll own the infrastructure foundation it runs on.

AI agents performing long-running scientific research demand resilient scheduling, lifecycle management, and resource orchestration well beyond what typical cloud-native workloads require. In this role you will influence platform architecture, establish infrastructure best practices, and partner closely with backend engineers, ML engineers, and researchers to deliver a production-grade environment that lets science move faster. At Edison Scientific, engineering at the senior level is about technical ownership and leverage: understanding how complex systems interact, making sound architectural tradeoffs, and building foundations that allow teams and science to move faster.

Requirements

  • 5+ years of professional infrastructure or platform engineering experience, with deep hands-on Kubernetes expertise in production environments.
  • Experience designing and implementing custom resource definitions (CRDs) and Kubernetes operators (using frameworks such as Kubebuilder, Operator SDK, or controller-runtime); see the sketch after this list.
  • Track record of operating and scaling Kubernetes clusters supporting thousands of persistent or long-lived resources (stateful workloads, persistent pods, long-running jobs).
  • Deep understanding of Kubernetes internals — API server, etcd, scheduler, controller manager, kubelet — and how they behave at scale.
  • Expertise with cloud infrastructure (AWS EKS, GCP GKE, or Azure AKS) and associated networking, storage, and IAM primitives.
  • Proficiency in at least one systems or backend language for operator development and infrastructure tooling.
  • Hands-on experience with infrastructure-as-code tools (Terraform, Pulumi, or Crossplane) and GitOps workflows.
  • Strong working knowledge of container networking (CNI plugins, service mesh, network policies), storage (CSI, persistent volumes, StatefulSets), and security (RBAC, Pod Security Standards, secrets management).
  • Ability to operate autonomously, make sound technical judgments, and drive projects from concept through production.
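
To make the CRD/operator requirement concrete, here is a minimal controller-runtime reconciler sketch. It is illustrative only, not Edison Scientific's actual stack: a production agent operator would define its own Agent CRD with Kubebuilder-generated types, whereas this sketch watches core Pods so it compiles on its own.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// PodReconciler is a toy reconciler. A real agent operator would reconcile a
// custom Agent resource instead; Pods stand in here to keep the sketch
// self-contained.
type PodReconciler struct {
	client.Client
}

func (r *PodReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var pod corev1.Pod
	if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
		// Object deleted or unreadable: a real operator would decide whether
		// to recreate backing resources or clean up state here.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Drive observed state toward the desired state declared in the spec,
	// e.g. keeping a long-running agent workload alive across node failures.
	logger.Info("reconciled", "pod", req.NamespacedName, "phase", pod.Status.Phase)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Pod{}).
		Complete(&PodReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```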

Nice To Haves

  • Experience with data-intensive platforms, scientific computing, or ML/AI infrastructure.
  • Prior experience in startups or small teams with significant architectural ownership and ambiguity.
  • Experience scaling systems, teams, or platforms through periods of rapid growth.

Responsibilities

  • Architect, implement, and operate Kubernetes clusters that support thousands of concurrent, persistent resources (agents, jobs, services) with high availability and efficient resource utilization.
  • Design and develop custom resource definitions (CRDs) and Kubernetes operators to model and manage domain-specific workloads such as AI agent lifecycles, research pipelines, and long-running compute tasks.
  • Drive the strategy for cluster scaling, node pool management, autoscaling policies, and resource quota frameworks to handle rapid workload growth.
  • Build and maintain infrastructure-as-code (Terraform, Pulumi, or similar) for reproducible, version-controlled environment management.
  • Design and implement robust scheduling, placement, and affinity strategies to optimize cost, performance, and fault tolerance for heterogeneous workloads (CPU, GPU, memory-intensive); see the placement sketch after this list.
  • Establish and uphold best practices around observability, monitoring, alerting, and incident response for infrastructure systems (Prometheus, Grafana, Datadog, or similar).
  • Own storage and networking strategy within Kubernetes — including persistent volume management, CSI drivers, service mesh, network policies, and ingress architecture.
  • Troubleshoot complex, cross-system infrastructure issues and guide others through effective debugging and remediation in distributed environments.
  • Collaborate closely with backend, ML, and research teams to understand workload requirements and translate them into reliable infrastructure patterns.
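
As a rough illustration of the placement strategies mentioned above (not a prescription for Edison Scientific's clusters), the sketch below builds a pod spec that targets a hypothetical GPU node pool. The "node-pool: gpu" label, the tolerated taint, and the container image are all assumed names.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// gpuAgentPod sketches a placement policy for a GPU-bound agent workload:
// it pins the pod to a hypothetical "gpu" node pool via a nodeSelector,
// tolerates the taint that keeps CPU-only workloads off those nodes, and
// requests a single GPU through the nvidia.com/gpu extended resource.
func gpuAgentPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "agents"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"node-pool": "gpu"}, // hypothetical node label
			Tolerations: []corev1.Toleration{{
				Key:      "nvidia.com/gpu",
				Operator: corev1.TolerationOpExists,
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{{
				Name:  "agent",
				Image: "example.com/agent:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
}

func main() {
	out, err := yaml.Marshal(gpuAgentPod("research-agent-0"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // print the manifest that would be applied
}
```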

Benefits

  • Equity