About The Position

We are seeking a Big Data Infrastructure Engineer with strong development and automation expertise to design, implement, and optimize reliable infrastructure services for data pipelines. The ideal candidate is a self-motivated individual committed to continuously improving systems and processes to achieve exceptional results. You will leverage a diverse set of tools and technologies to drive innovation and enhance the efficiency of our data infrastructure.

About The Team

The Big Data Infrastructure team is responsible for building and operating the large-scale data platforms that power Zoom’s products. The team collects and processes data from multiple product sources, including server logs, databases, and client telemetry, and transforms it into a unified data lake that supports both operational troubleshooting and long-term business insights. The team owns the end-to-end big data infrastructure, including data ingestion, streaming and batch processing, storage, governance, monitoring, and access control, and is transitioning from managed cloud services to a custom, open-source, Kubernetes-based big data platform.

Requirements

  • Possess 5+ years of recent experience as a Big Data infrastructure engineer and hold a Bachelor’s degree or higher in Computer Science or a related field.
  • Report to the hiring manager and collaborate with multiple teams within the Zoom Meeting Vertical.
  • Develop and automate infrastructure build-up and changes using Java, Python, and shell scripting, leveraging AWS manageability and automation interfaces.
  • Demonstrate experience with AWS, OCI, and Azure cloud services (e.g., EC2, S3, VPC), and apply strong system design and infrastructure management principles.
  • Manage and develop Big Data platforms based on Spark, Flink, Trino, and alternative open-source solutions running on Kubernetes, including platform and service troubleshooting.
  • Leverage programming experience with Java, along with Terraform, Ansible, and other infrastructure-as-code tools, while learning new languages as needed and demonstrating a collaborative mindset focused on quality work and team problem-solving.

Responsibilities

  • Understanding Big Data architecture; researching and providing proven solutions for large-scale data processing platforms; and defining automated approaches to build and update Big Data infrastructure.
  • Developing, enhancing, and maintaining open-source data analytics services running on Kubernetes, while leading major projects end to end across internal and external teams.
  • Designing and implementing advanced monitoring systems for the Big Data platform, defining operational and service-quality metrics.
  • Establishing goals and quality standards to continuously improve platform reliability and meet business and operational requirements.
  • Collaborating closely with Big Data engineers to ensure smooth data ingestion, transformation, and data-at-rest processes.
  • Driving issue resolution by supporting issue replication and performing root cause analysis in development environments.