About The Position

You will lead the evolution of DataOps practices at a global scale, designing highly automated, resilient, and scalable data platforms. This role focuses on building self-service, microservices-based data infrastructure on GCP, enabling rapid deployment, strong data reliability, and continuous delivery through advanced automation and observability.

Requirements

  • 8+ years of progressive experience in DataOps, Data Engineering, or Platform Engineering roles.
  • Strong expertise in data warehousing, data lakes, and distributed processing technologies (Spark, Hadoop, Kafka).
  • Advanced proficiency in SQL and Python; working knowledge of Java or Scala.
  • Deep experience with Google Cloud Platform (GCP) data and infrastructure services.
  • Expert understanding of microservices architecture and containerization (Docker, Kubernetes).
  • Proven hands-on experience with Infrastructure as Code (IaC) tools; Terraform preferred.
  • Strong background in CI/CD methodologies applied to data pipelines.
  • Experience designing and implementing data automation frameworks.
  • Advanced knowledge of data orchestration, monitoring, and observability tooling.
  • Ability to architect highly scalable, resilient, and fault-tolerant data systems.
  • Strong problem-solving skills and ability to operate independently in ambiguous environments.

Nice To Haves

  • Experience with real-time streaming systems at very large scale.
  • Exposure to AWS or Azure data platforms (in addition to GCP).
  • Experience with data quality tooling and governance frameworks.
  • Background building internal developer platforms or self-service infrastructure.
  • Experience influencing technical strategy across multiple teams or domains.

Responsibilities

  • Lead the design and implementation of enterprise-scale DataOps platforms and automation frameworks.
  • Architect and evolve GCP-native data platforms supporting high-throughput batch and real-time workloads.
  • Design and implement microservices-based data architectures using containerization technologies.
  • Build and maintain CI/CD pipelines for data workflows, including automated testing and deployment.
  • Develop IaC solutions to standardize and automate platform provisioning.
  • Implement robust data orchestration, monitoring, and observability capabilities.
  • Establish and enforce data quality frameworks to ensure reliability and trust in data products.
  • Support real-time data platforms operating at very large scale.
  • Partner with platform squads to deliver self-service data infrastructure products.
  • Drive best practices for automation, resiliency, scalability, and operational excellence.
  • Influence technical direction, mentor senior engineers, and lead through ambiguity.