- Design and implement end-to-end data pipelines (ETL/ELT) that ingest, process, and curate large-scale enterprise data, including telemetry/vehicle data and other structured/unstructured sources.
- Migrate and modernize data assets to a centralized data platform (e.g., BigQuery) using principled data lake/warehouse architectures (Medallion, i.e., Bronze/Silver/Gold layers) to power analytics and reporting; a minimal layering sketch follows below.
- Architect scalable data models and data warehouses, optimizing for query performance, maintainability, and cost efficiency.
- Develop and operate robust orchestration pipelines using Airflow/Astronomer or BigQuery scheduled queries, with secure, reproducible CI/CD workflows (Terraform + Git); see the DAG sketch below.
- Build and maintain reliable data quality checks, lineage, and monitoring with observability tools (e.g., Splunk, plus Looker/Grafana/Tableau/Power BI dashboards) to rapidly detect and resolve data issues; a sample check appears below.
- Implement data governance, security, and compliance controls (data lineage, access controls, PII/PHI protection) in collaboration with security and privacy teams.
- Lead the design and delivery of analytics-ready data assets for cross-functional teams, including dashboards, alerts, and self-service analytics.
- Mentor and coach junior engineers, review code, and drive best practices in data engineering, testing, and documentation.
- Collaborate with data scientists, product managers, and business stakeholders to translate requirements into scalable data solutions and timely insights.
- Monitor costs and plan capacity for cloud resources; optimize storage and compute usage across GCP services (BigQuery, Dataflow, Dataproc, GCS); a dry-run cost sketch appears below.
- Participate in on-call rotations and incident response to maintain high availability of data services.

Established and active employee resource groups.
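To make the Bronze/Silver/Gold layering concrete, here is a minimal sketch of a Bronze-to-Silver promotion in BigQuery. All project, dataset, table, and column names (`my-project`, `bronze`, `silver`, `vehicle_telemetry`, `event_ts`, etc.) are illustrative assumptions, not part of the posting.

```python
# Hypothetical sketch: promoting raw telemetry from a Bronze dataset to a
# cleaned Silver table in BigQuery. All names below are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

SILVER_SQL = """
CREATE OR REPLACE TABLE `my-project.silver.vehicle_telemetry`
PARTITION BY DATE(event_ts)
CLUSTER BY vehicle_id AS
SELECT
  vehicle_id,
  TIMESTAMP(event_ts) AS event_ts,
  SAFE_CAST(speed_kph AS FLOAT64) AS speed_kph
FROM `my-project.bronze.vehicle_telemetry_raw`
WHERE vehicle_id IS NOT NULL  -- basic cleansing on promotion
"""

client.query(SILVER_SQL).result()  # blocks until the load job completes
```

Partitioning on the event date and clustering on `vehicle_id` is one common way to keep downstream Silver/Gold queries fast and cheap; the right keys depend on the actual access patterns.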
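A daily orchestration pipeline of the kind described above might look like the following Airflow 2.x DAG sketch; the DAG id, schedule, and `promote_to_silver` callable are hypothetical stand-ins, not a prescribed implementation.

```python
# Hypothetical sketch of a daily Airflow DAG (Airflow 2.x API).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def promote_to_silver() -> None:
    """Placeholder for the Bronze -> Silver promotion query shown earlier."""
    ...


with DAG(
    dag_id="vehicle_telemetry_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    PythonOperator(
        task_id="promote_to_silver",
        python_callable=promote_to_silver,
    )
```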
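One lightweight way to implement the data quality checks mentioned above is a row-count gate on the most recent partition, raising loudly so the orchestrator can fail the task and alert; again, every name here is an illustrative assumption.

```python
# Hypothetical sketch of a data quality gate: fail if the Silver table
# received no rows for yesterday's partition. Names are illustrative.
from google.cloud import bigquery

QC_SQL = """
SELECT COUNT(*) AS n
FROM `my-project.silver.vehicle_telemetry`
WHERE DATE(event_ts) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
"""


def check_yesterday_partition_nonempty() -> None:
    client = bigquery.Client()
    rows = list(client.query(QC_SQL).result())
    if rows[0]["n"] == 0:
        # An orchestrator such as Airflow would mark the task failed and alert.
        raise ValueError("Data quality check failed: empty partition for yesterday")


if __name__ == "__main__":
    check_yesterday_partition_nonempty()
```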
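For the cost-monitoring duty, BigQuery's dry-run mode reports how many bytes a query would scan without running it, which is one way to catch expensive queries before they ship. A small sketch, assuming the same hypothetical table:

```python
# Hypothetical sketch: estimate query cost with a BigQuery dry run.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

job = client.query(
    "SELECT vehicle_id, speed_kph FROM `my-project.silver.vehicle_telemetry` "
    "WHERE DATE(event_ts) = CURRENT_DATE()",
    job_config=job_config,
)
gib = job.total_bytes_processed / 1024**3
print(f"Query would scan {gib:.2f} GiB")  # partition filter keeps the scan small
```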
Job Type: Full-time
Career Level: Mid Level