About The Position

This staff-level role leads the technical strategy and execution for a large-scale data platform built on Google Cloud. The platform transforms extensive customer datasets into advanced analytical outputs and derived metrics. As the most technically experienced engineer on the team, this person drives architectural direction, mentors engineers, and ensures the platform is secure, scalable, and reliable. The role involves close collaboration with a dedicated Data Science team that provides statistical methods and analytical models, which this engineer will help integrate and operationalize.

Requirements

  • 8+ years of software engineering experience, including 3+ years in senior or staff-level positions.
  • Deep experience building distributed data systems using GCP services, including BigQuery, Dataflow (Beam), Dataform, and Pub/Sub.
  • Strong programming skills in Python, Java, Scala, or similar languages used in data engineering.
  • Strong understanding of data modeling, partitioning strategies, schema evolution, and cost-efficient query optimization.
  • Experience designing security-first data pipelines with privacy-preserving transformations.
  • Ability to break ambiguous problems into structured designs and actionable plans.
  • Demonstrated ability to mentor engineers and influence technical direction across teams.
  • Strong experience integrating data-science workflows or ML/analytical models into production systems.

Nice To Haves

  • Experience with HCM, payroll, or other HR/enterprise data domains.
  • Background supporting analytics-driven products, metrics systems, or derived-indicator pipelines.
  • Experience using Dataform for SQL transformation management and production model organization.
  • Familiarity with statistical concepts or time-series patterns (not for model creation, but for effective collaboration with Data Science).
  • Experience with data lineage, data-quality scoring, observability systems, and automated validation frameworks.
  • Experience with Cloud Composer, Dataproc, or streaming architectures for near-real-time data processing.
  • Track record of shaping long-term architecture or leading cross-organizational platform initiatives.

Responsibilities

  • Architect and evolve a cloud-native data platform using BigQuery, Dataflow, Dataform, Pub/Sub, Dataproc, and Cloud Storage.
  • Build scalable data ingestion, transformation, and aggregation pipelines that support high-volume HCM datasets.
  • Collaborate with the Data Science team to operationalize their models, quantitative logic, and statistical frameworks—without owning statistical methodology design.
  • Implement end-to-end data workflows that generate aggregated insights and derived metrics at scale.
  • Engineer strong privacy, compliance, and anonymization mechanisms into all data pipelines (masking, thresholding, auditing).
  • Translate analytical requirements into technical designs and execution plans in partnership with product and data stakeholders.
  • Lead architecture reviews, performance tuning, cost optimization, and reliability engineering initiatives.
  • Mentor engineers; guide design reviews, coding standards, and technical best practices.
  • Troubleshoot complex issues involving distributed processing, data-quality anomalies, or system bottlenecks.
  • Introduce new GCP tools or frameworks as needed to support long-term scalability and maintainability.

Benefits

  • Wellness programs.
  • Tuition reimbursement.
  • U Choose — a customizable expense reimbursement program that can be applied to 200+ needs that best suit you and your family, from student loan repayment to childcare to pet insurance.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000 employees
