About the position
This role is responsible for designing and building LULA's Data Platform for the company's insurance and product teams. The work includes building reliable, efficient, testable, and maintainable data pipelines using tools such as Fivetran, Talend, Matillion, and Apache Airflow; designing data models for optimal storage and retrieval; and influencing logging practices to support data flow. The ideal candidate has at least 8 years of experience in data engineering, proficiency with technologies such as dbt, BigQuery, Redshift, and Postgres, and expertise in Python and custom ETL/ELT processes.
Responsibilities
- Design and build LULA's Data Platform used by insurance and product teams
- Build reliable, efficient, testable, and maintainable data pipelines using tools like Fivetran, Talend, Matillion, Apache Airflow, or similar (a minimal sketch of this kind of pipeline follows this list)
- Design data models for optimal storage and retrieval and to meet critical product requirements
- Understand and influence logging to support data flow and architect logging best practices where needed
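
For illustration only, and not a description of LULA's actual stack: the sketch below shows the shape of an orchestrated daily pipeline of the kind this role would build, using Apache Airflow's TaskFlow API. The DAG name, stub rows, and validation rule are hypothetical assumptions.

```python
# A minimal, hypothetical sketch of a daily ELT DAG using Airflow's
# TaskFlow API (the `schedule` parameter assumes Airflow 2.4+).
# The data and table semantics are invented for illustration; a real
# pipeline would read from source systems and write to a warehouse
# such as BigQuery or Redshift.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_policy_elt():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling raw rows from a source system
        # or a Fivetran-landed staging table.
        return [{"policy_id": 1, "premium": 120.0},
                {"policy_id": 2, "premium": -5.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Enforce basic data quality: drop rows that fail validation.
        return [r for r in rows if r["premium"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for an idempotent write into the warehouse.
        print(f"loading {len(rows)} validated rows")

    load(transform(extract()))


example_policy_elt()
```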
Requirements
- At least 8 years of experience in data engineering
- Hands-on experience with technologies such as dbt, BigQuery, Redshift, and Postgres
- Proficiency with Python
- Experience with custom ETL/ELT and with programming/scripting languages (illustrated below)
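
To make the "custom ETL/ELT" requirement concrete, here is a minimal, self-contained sketch in plain Python. SQLite stands in for Postgres so the example runs anywhere; the claims table and the stub rows are hypothetical.

```python
# A hedged sketch of a custom ETL step in plain Python. SQLite stands
# in for Postgres so the example is self-contained; the claims schema
# and raw rows are hypothetical placeholders.
import sqlite3


def run_etl(conn: sqlite3.Connection) -> int:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS claims_clean "
        "(claim_id INTEGER PRIMARY KEY, amount REAL)"
    )
    # Extract: raw rows, stubbed here as literals.
    raw = [(1, "120.50"), (2, "bad-value"), (3, "75.00")]
    # Transform: coerce amounts to floats, dropping unparseable rows.
    clean = []
    for claim_id, amount in raw:
        try:
            clean.append((claim_id, float(amount)))
        except ValueError:
            continue  # In production, route to a dead-letter table.
    # Load: idempotent upsert into the target table.
    conn.executemany(
        "INSERT OR REPLACE INTO claims_clean (claim_id, amount) VALUES (?, ?)",
        clean,
    )
    conn.commit()
    return len(clean)


if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        print(run_etl(conn), "rows loaded")
```

The INSERT OR REPLACE keeps the load step idempotent, so re-running the job does not duplicate rows.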
Benefits
- Competitive salary and compensation package
- Opportunity to work with cutting-edge technologies and tools (Fivetran, Talend, Matillion, Apache Airflow, dbt, BigQuery, Redshift, Postgres, Python, Looker, Tableau)
- Chance to design and build a data platform used by insurance and product teams
- Ability to build and optimize reliable, efficient, testable, and maintainable data pipelines, architecture, and data sets
- Opportunity to design data models for optimal storage and retrieval
- Influence over logging best practices, including the chance to design and deploy high-performance systems with reliable monitoring
- Work in a fast-paced startup environment
- Chance to visualize and model raw data sets using a data visualization tool such as Looker or Tableau
- Opportunity to deepen experience in custom ETL/ELT and programming/scripting languages