Senior Enterprise Data Warehouse Developer

StackAdapt
$100,000 - $120,000 | Remote

About The Position

StackAdapt is the leading technology company that empowers marketers to reach, engage, and convert audiences with precision. With 465 billion automated optimizations per second, the AI-powered StackAdapt Marketing Platform seamlessly connects brand and performance marketing to drive measurable results across the entire customer journey. The most forward-thinking marketers choose StackAdapt, a self-serve advertising platform specializing in multi-channel solutions, to orchestrate high-impact campaigns across programmatic advertising and marketing channels. We are a hub of innovation, imagination, and creativity.

We have an exciting opportunity in the newly formed Enterprise Data Office (EDO), whose mandate is to serve the business leaders and stakeholders at StackAdapt with trusted official reporting and governed self-service analytics.

The Senior EDW Developer will be accountable for building and supporting the EDO's many data pipelines, in accordance with enterprise data warehouse and development best practices, as well as supporting data operations within the EDO. This role will build data pipelines based on the designs of the Enterprise Data Warehouse Architect to deliver data solutions that meet or exceed the data needs of the business, as articulated by the Manager of Business Data Analysis in the BRDs/FRDs developed in consultation with the business. This includes building ingestion pipelines from a variety of data sources and building ETL for data models that are fit for purpose for immediate business consumption as well as for broader reporting and analytical use cases. The role will build the data transformation pipelines that materialize the data model, configure and automate pipeline orchestration to provide seamless execution, and collaborate with BI Engineers to ensure our business stakeholders' needs are consistently met as they consume data from within the BI platform.
StackAdapt is a Remote-First company. We are open to candidates located anywhere in Canada for this position.

Requirements

  • Minimum 8 years of experience building data pipelines/ETL within an Enterprise Data Warehouse environment
  • Hands-on experience building ETL/ELT data pipelines via custom-coded scripts (e.g., Spark, Python, Java, SQL stored procedures) AND via integration platforms (e.g., Coalesce, PowerCenter, DataStage, Talend)
  • Knowledgeable in data warehousing architecture fundamentals (e.g., Kimball/Inmon methodology, dimensional modeling, conformed dimensions, SCDs, etc.)
  • Hands-on experience administering cloud-based data warehouses (Snowflake preferred) and big data storage formats (e.g., Iceberg, Delta Lake, Parquet, Avro), including tasks such as RBAC, PII data masking, and administering user and service accounts and their access via OAuth/key-pair authentication, leveraging secret management services such as AWS KMS, HashiCorp Vault, etc.
  • Proven experience working with cloud platforms (AWS preferred); knowledge of container technologies such as Kubernetes, data security, and networking fundamentals is a great asset
  • Highly experienced in orchestrating data operations via tools such as Apache Airflow, Kestra, cron, etc.; knowledge of Infrastructure-as-Code (e.g., Terraform) is a great asset
  • Keen analytical and troubleshooting skills to diagnose issues and identify root causes
  • Excellent written and verbal communication skills

Responsibilities

  • Build reliable and scalable data ingestion pipelines to extract data from a variety of sources, including databases (e.g., RDBMS, NoSQL, file stores), applications (via API), flat files, etc., into the Data Lake with appropriate metadata tagging
  • Orchestrate data pipelines via batch, near-real-time, or real-time operations, depending on requirements, to ensure seamless and predictable execution using enterprise-grade orchestration platforms
  • Contribute feedback to data architecture, data modeling, data tooling, data operations, and data platform decisions
  • Support the day-to-day operation of the EDO pipelines by monitoring alerts and investigating, troubleshooting, and remediating production issues
  • Coach and mentor junior and intermediate data engineers and ETL developers as the EDO team grows

Benefits

  • Highly competitive salary
  • Retirement/401(k)/pension savings globally
  • Competitive paid time off packages, including your birthday off!
  • Access to a comprehensive mental health care program
  • Health benefits from day one of employment
  • Work from home reimbursements
  • Optional global WeWork membership for those who want a change from their home office, plus hubs in London and Toronto
  • Robust training and onboarding program
  • Coverage and support of personal development initiatives (conferences, courses, books, etc.)
  • Access to StackAdapt programmatic courses and certifications to support continuous learning
  • An awesome parental leave program
  • A friendly, welcoming, and supportive culture
  • Our social and team events!