As an intern at Hunkemöller, you will dive deep into core datasets in BigQuery, enriching the data catalog with detailed column-level descriptions and business context. Your responsibilities will include:

- Classifying data assets, establishing data lineage, and curating a comprehensive data dictionary to improve data discovery and governance.
- Quantifying and optimizing the cost/performance trade-offs of dbt materialization strategies (table, incremental, view) and BigQuery billing models (Standard vs. Editions) for high-traffic models (see the first sketch after this list).
- Implementing environment-specific optimization strategies and developing a pipeline that pushes dbt run metadata (e.g., last run time) to Looker, enabling intelligent cache refreshes and faster dashboards (second sketch below).
- Designing and deploying a data quality monitoring loop that triggers Looker alerts when dbt data quality tests fail, and measuring incident response times (third sketch below).
- Creating a dbt-based process that automatically detects and remediates schema drift in source data, through alerts, model stops, or auto-migrations (fourth sketch below).
- Analyzing usage data to identify which dbt models drive the most value (e.g., most-viewed dashboards, frequently run explores) and recommending models for deprecation or further optimization.
- Assisting in the migration of legacy data models to the current data warehouse and supporting the deprecation of obsolete models.
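To make the materialization trade-off concrete, here is a minimal sketch of how the scan cost of a full table rebuild versus an incremental run could be compared using BigQuery dry runs, assuming on-demand (Standard) pricing. The project, dataset, queries, and price constant are hypothetical placeholders, not part of the role description.

```python
from google.cloud import bigquery

# Hypothetical queries approximating what a dbt `table` rebuild vs. an
# `incremental` run would scan; replace with the compiled SQL of a real model.
FULL_REFRESH_SQL = "SELECT * FROM `my-project.analytics.orders`"
INCREMENTAL_SQL = """
    SELECT * FROM `my-project.analytics.orders`
    WHERE updated_at > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
"""

def dry_run_cost(client: bigquery.Client, sql: str, usd_per_tib: float = 6.25) -> float:
    """Estimate on-demand cost of a query without executing it."""
    job = client.query(
        sql,
        job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
    )
    tib_scanned = job.total_bytes_processed / 2**40
    return tib_scanned * usd_per_tib

client = bigquery.Client()
print(f"full refresh : ${dry_run_cost(client, FULL_REFRESH_SQL):.4f}")
print(f"incremental  : ${dry_run_cost(client, INCREMENTAL_SQL):.4f}")
```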
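For the dbt-to-Looker metadata pipeline, one possible shape is to land run timestamps from dbt's run_results.json artifact in a small BigQuery table that Looker can poll. The analytics_meta.dbt_run_status table name is a hypothetical stand-in.

```python
import json

from google.cloud import bigquery

# Parse the run_results.json artifact that dbt writes to target/ after each run.
with open("target/run_results.json") as f:
    run_results = json.load(f)

generated_at = run_results["metadata"]["generated_at"]
rows = [
    {
        "unique_id": r["unique_id"],
        "status": r["status"],
        "run_completed_at": generated_at,
    }
    for r in run_results["results"]
]

# `analytics_meta.dbt_run_status` is a hypothetical metadata table that a
# Looker datagroup can poll to decide when caches should be invalidated.
client = bigquery.Client()
errors = client.insert_rows_json("my-project.analytics_meta.dbt_run_status", rows)
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```

On the Looker side, a datagroup whose sql_trigger selects MAX(run_completed_at) from this table would then flush caches only when dbt has actually delivered fresh data, rather than on a fixed schedule.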
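The data quality monitoring loop could start from the same artifact: a sketch that collects failed dbt tests and posts them to an alerting webhook. The webhook URL is a placeholder for whatever channel (Slack, PagerDuty, or a Looker-monitored table) the team settles on.

```python
import json
import urllib.request

# Hypothetical alerting endpoint; swap in the team's real channel.
ALERT_WEBHOOK = "https://hooks.example.com/dbt-quality"

with open("target/run_results.json") as f:
    results = json.load(f)["results"]

# dbt test nodes have unique_ids prefixed with "test."; collect failures.
failed = [
    r["unique_id"]
    for r in results
    if r["unique_id"].startswith("test.") and r["status"] in ("fail", "error")
]

if failed:
    payload = json.dumps({"text": f"{len(failed)} dbt tests failed: {failed}"}).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```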
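For schema drift, a simple detector compares a source table's live schema in BigQuery against the columns dbt expects. The expected schema and table name below are illustrative; a production version would raise an alert, stop downstream models, or trigger an auto-migration instead of printing.

```python
from google.cloud import bigquery

# Hypothetical expected source schema, e.g. exported from dbt's sources.yml.
EXPECTED = {"order_id": "INTEGER", "customer_id": "INTEGER", "updated_at": "TIMESTAMP"}

client = bigquery.Client()
table = client.get_table("my-project.raw.orders")  # hypothetical source table
actual = {field.name: field.field_type for field in table.schema}

added = actual.keys() - EXPECTED.keys()
removed = EXPECTED.keys() - actual.keys()
retyped = {c for c in EXPECTED.keys() & actual.keys() if EXPECTED[c] != actual[c]}

if added or removed or retyped:
    print(
        f"Schema drift on raw.orders: "
        f"added={sorted(added)} removed={sorted(removed)} retyped={sorted(retyped)}"
    )
```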
Career Level: Intern
Education Level: Bachelor's degree