Senior Data Engineer

The Fedcap Group

About The Position

The Fedcap Group (TFG) is seeking a transformational and highly strategic Data Engineer to architect and lead the enterprise data warehouse and data capabilities. This role is instrumental in enabling operational excellence, mission alignment, and scalable growth across TFG’s international network, and reports to the Head of Data and Analytics.

Requirements

  • 5+ years of proven experience in data engineering roles.
  • Deep expertise in enterprise system implementations, data lifecycle management, modular frameworks, and data platform architecture.
  • Strong hands-on experience with dbt, Azure, and Snowflake is a must.
  • Demonstrated ability to design and implement scalable, secure, and modular data pipelines.
  • Experience with data quality frameworks, lineage, and governance practices.
  • Track record of delivering end-to-end data solutions in cloud environments.
  • Bachelor’s degree in Information Systems, Computer Science, Engineering, or a related field.

Nice To Haves

  • An advanced degree in a related field is a plus; however, hands-on experience is strongly preferred.
  • Snowflake SnowPro Advanced Data Engineer or Architect certification (preferred).
  • Data Governance certifications (preferred).

Responsibilities

  • Collaborate with the Head of Data and Analytics to implement the enterprise Medallion Architecture (Bronze → Silver → Gold).
  • Design, build, and maintain data ingestion pipelines in Azure Data Factory (ADF) to move data from diverse sources into Azure Data Lake Storage Gen2 (Bronze).
  • Configure and manage secure integrations between Azure and Snowflake, including external stages, storage integrations, and automated ingestion patterns (Snowpipe, Streams, Tasks).
  • Develop and optimize Snowflake data models (fact, dimension, staging tables) aligned to Bronze–Silver–Gold architecture and business KPIs.
  • Implement role-based access control (RBAC), data masking, and row/column-level security in Snowflake to ensure data privacy and compliance.
  • Build and maintain a modular dbt framework, including models, macros, tests, and snapshots, to enforce data quality and accelerate transformations.
  • Create and manage CI/CD pipelines for dbt using GitHub Actions or Azure DevOps, ensuring reliable deployments across environments.
  • Write and optimize complex SQL and Python scripts to automate workflows, monitor data pipelines, and troubleshoot production issues.
  • Implement data validation, quality checks, and monitoring frameworks to ensure freshness, accuracy, and reliability of data products.
  • Collaborate directly with BI, Analytics, and Data Science teams to deliver curated, business-ready datasets.
  • Take end-to-end ownership of assigned data engineering projects: requirements → design → build → deploy → support.
  • Document pipelines, transformations, and models to ensure reproducibility and team-wide adoption.
© 2024 Teal Labs, Inc