Responsibilities:
- Build and maintain secure, compliant data processing pipelines, tables, views, procedures, and datasets
- Tune and monitor data pipelines to maximize their performance over time
- Collaborate with analysts and data scientists to create value from data, and build the production infrastructure to deliver and act on those insights at scale
- Integrate, transform, and consolidate data from various structured and unstructured systems into structures that support analytics, data science, and machine learning
- Explore second- and third-party data sources that can drive value, and design smart strategies to acquire and standardize that data to support applications and analytics
- Monitor, troubleshoot, and resolve issues in production environments
- Review ETL/ELT code and pipelines, and assess tools and procedures for improved data validation, code quality, and engineering efficiency

Requirements:
- 6+ years of related experience with a Bachelor's degree, or 4+ years of related experience with a Master's degree
- Bachelor's degree in Computer Science, Software Engineering, or a related field required; Master's degree in Computer Science, Software Engineering, or a related field preferred
- 3+ years of experience with large data sets required
- 1+ years of cloud data experience required
- Software design concepts
- Snowflake, SQL Server
- ETL: Azure Data Factory, Synapse Analytics, dbt, and Databricks
- Azure Monitor, Logic Apps
- Azure Storage: Data Lake, Delta Lake
- Data modeling using Kimball methodology (star schema, snowflake schema)
- Cloud development (preferably Azure)
- Programming (Python, Scala, Java, VB, etc.)
- Azure DevOps and CI/CD
- Agile framework
- Software architecture

#LI-HH1
Job Type
Full-time
Career Level
Senior
Number of Employees
501-1,000 employees