Data Engineer II, Analytics
Vimeo
·
Posted:
May 4, 2023
·
Onsite
About the position
Vimeo is seeking an experienced Data Engineer II, Analytics to join its Data Architecture and Analytics Engineering team. The ideal candidate will work closely with Data Analysts and Data Scientists to create and maintain robust, scalable, and sustainable data models that deliver decision-making insights to senior leadership, including executives. The role involves building data models that support dynamic and efficient analysis, ensuring alignment with coding best practices and reusable code, and partnering with Data Analysts to understand business processes and identify opportunities to build efficient data models for refreshable analysis and reporting. The candidate should have a BS/MS in Computer Science or a related technical field, 2+ years of experience working with Analytics and Data Engineering teams in a mixed data engineering and analytics role, and 2+ years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud. Proficiency in SQL, Python, dimensional modeling, data pipeline development, workflow management and orchestration tools, ETL optimization and best practices, Snowflake or other column-oriented cloud databases, Looker or a similar business intelligence tool, and relational databases is required. Bonus skills include dbt, Git/GitHub, Looker/Tableau, large data sets (terabyte scale), and Apache Airflow.
Responsibilities
- Build data models that can support dynamic and efficient data analysis, and collaborate with other teams to maintain and evolve those models over time
- Ensure alignment with coding best practices and develop reusable code
- Partner with Data Analysts to understand business processes and identify opportunities to build scalable, efficient data models for refreshable analysis and reporting
- Create and enable the generation of ad-hoc / on-demand data sets for use by analysts and data scientists
- Work with Engineering teams to set up monitoring and alerting systems for business and product KPIs
Requirements
- BS/MS in Computer Science or a related technical field.
- 2+ years working with Analytics and Data Engineering teams with a mixed data engineering and analytics background
- 2+ years of experience in scalable data architecture, fault-tolerant ETL, and monitoring of data quality in the cloud
- Experience working on or leading initiatives around data governance, master data management, data catalogs, and enterprise data warehouse architecture
- Strong analytical skills, data sensibility, and strong communication skills
- Comfortable working on multiple projects simultaneously
- Proficiency in SQL, Python, dimensional modeling, data pipeline development, workflow management and orchestration tools, ETL optimization and best practices, Snowflake or other column-oriented cloud databases, Looker or a similar business intelligence tool, and relational databases
- Bonus points (nice to have, but not required): dbt, Git/GitHub, Looker/Tableau, large data sets (terabyte scale), Apache Airflow