BI Cloud Datawarehouse Developer II

Weill Cornell Medical College, New York, NY

About The Position

Responsible for the technical design and development activities for implementing large-scale data solutions, primarily in Azure but also in AWS or Snowflake, to support BI solutions in the cloud. Designs architectures for data integration and processing to provide high-quality datasets, and uses Big Data processing tools to build data pipelines on the Azure technology stack.

Requirements

  • Bachelor's Degree in Computer Science or related field preferred; equivalent work experience will be considered
  • 5-10 years of experience in designing and delivering data warehouse implementations and performing in-depth analysis of complex transactional and operational source systems
  • Hands-on experience with Azure Data Lake, Azure Data Factory, Synapse Analytics
  • Hands-on experience with Delta tables and implementing delta logic on data lakes and warehouses
  • Hands-on experience implementing multi-layered data lake architectures
  • Experience migrating data from SAP Cloud and/or on-premises environments to cloud data warehouse systems
  • Demonstrated ability to develop complex SQL queries and stored procedures
  • Experience working in an Agile/Scrum environment
  • Excellent troubleshooting and problem-solving skills
  • Excellent verbal and written communication skills

Nice To Haves

  • Power BI experience is a plus
  • Hands-on experience in Python or Scala is preferred
  • Certifications in Azure or AWS technologies are a plus
  • Knowledge of SAP Finance or Human Capital Management modules is highly desired
  • Experience in the higher education industry is highly desired
  • Knowledge of Research Administrative Systems and an understanding of research attributes, workflows, and processes

Responsibilities

  • Translates business requirements into data models that drive Data Lake and/or Data Warehouse architectures.
  • Works with cross-functional teams to gather, document, and approve business requirements for data analysis and reporting projects.
  • Builds data ingestion pipelines that collect and transfer data from source systems in batch and real time, using solutions such as change data capture (CDC) to land data from cloud and on-premises applications as raw data or into data lakes.
  • Designs, develops, optimizes, and maintains Cloud Data Platform pipelines that adhere to ELT and ETL principles and business goals, using Azure Data Factory, Azure Spark, AWS Glue, Snowpark, and/or SAP BODS.
  • Designs, develops, and implements data integration processes that transform unstructured and disparate source data into target data stores, data lakes, and data warehouses.
  • Models data for a variety of reporting and analytics requirements in cloud databases such as Amazon Redshift, Azure Synapse, Google BigQuery, and/or Snowflake.
  • Provides support in identifying existing data gaps, building new data pipelines, and providing automated solutions to deliver advanced analytical capabilities and enriched data to applications that support the organization's operations.
  • Reviews, manages, and optimizes performance and recommends improvements by adjusting models, applications, and their parameters.
  • Provides technical mentoring to other team members for best practices on data engineering and cloud technologies.
  • Analyzes and troubleshoots data-related issues in a timely manner and maintains cloud data management best practices and standards.
  • Identifies continuous process improvement opportunities and operationalizes solutions for them.