Senior Data Engineer

Foundation Partners Group
Orlando, FL

About The Position

We are looking for a Senior Data Engineer to design, build, and operate modern data platforms at scale. You will work across cloud-native Azure infrastructure, Microsoft Fabric, Azure SQL, and AWS, building reliable pipelines, performant data models, and self-service analytics that drive real business decisions. A meaningful part of this role involves containerized workload execution -- designing and deploying pipeline jobs using Azure Container Apps Jobs for scheduled and event-driven data processing. This is a hands-on role suited for an engineer who thrives in ambiguity, values clean architecture, and moves comfortably between strategy and implementation.

Requirements

  • Python: fluent in writing production-grade pipelines, data transformations, and automation scripts.
  • RDBMS: advanced T-SQL and/or ANSI SQL; experience with SQL Server, Azure SQL Database, and cloud warehouse query engines (Redshift, Fabric Warehouse).
  • MS Fabric: Warehouses, Lakehouses, Data Pipelines, OneLake, and Fabric's unified analytics model.
  • Azure ecosystem: Azure Data Factory, Azure SQL Database, Azure Data Lake Storage, Azure Key Vault, Azure Container Apps Jobs, Azure Container Registry, and related services.
  • Containerization: Docker image development, container registry management, and deploying workloads as Container Apps Jobs with schedule and event triggers, scaling rules, and environment variable/secret injection (a minimal entrypoint sketch follows this list).
  • AWS data services: S3 for data lake storage, Redshift for cloud data warehousing.
  • Data modeling: dimensional modeling, star/snowflake schema design, and entity-relationship modeling for both OLTP and OLAP workloads.
  • Version control and DevOps: Git, GitHub, pull request workflows, and CI/CD pipelines.
  • Data visualization: Power BI, Tableau.
  • Strong analytical problem-solving -- able to decompose ambiguous business problems into clean technical solutions.
  • Clear written and verbal communication with both technical peers and non-technical stakeholders.
  • Self-directed with strong attention to detail; comfortable owning work end-to-end.
  • 5 to 8 years of hands-on data engineering experience in production environments.
  • Proven track record designing and delivering data platforms on Azure and/or AWS.
  • Demonstrated experience migrating or modernizing legacy on-premises data infrastructure to cloud-native solutions.
  • Hands-on experience running workloads with Azure Container Apps Jobs or a comparable containerized job execution platform.
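
For illustration, a minimal sketch of the containerized job pattern the Containerization requirement describes, assuming a Python entrypoint; the environment variable names (SQL_CONN_STRING, EXTRACT_DATE) are hypothetical, standing in for whatever secrets and parameters a Container Apps Job definition would inject.

    import logging
    import os
    import sys

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("pipeline-job")

    def main() -> int:
        # Container Apps Jobs inject configuration as environment variables;
        # secrets (e.g. sourced from Azure Key Vault) arrive the same way.
        # Both names below are placeholders for this sketch.
        conn_string = os.environ["SQL_CONN_STRING"]  # fail fast if the secret is missing
        extract_date = os.environ.get("EXTRACT_DATE", "latest")

        log.info("starting extract for %s", extract_date)
        # ... extract / transform / load work would go here, using conn_string ...
        log.info("extract complete")
        return 0

    if __name__ == "__main__":
        # A nonzero exit code marks the execution as failed, letting the
        # job's retry policy re-run it.
        sys.exit(main())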

Nice To Haves

  • Experience with MS Fabric in a production capacity, including Fabric Warehouses and OneLake integration.
  • Familiarity with dbt (data build tool) or similar transformation frameworks.
  • Exposure to streaming or near-real-time data ingestion patterns (Event Hub, Kafka, Kinesis).
  • Experience with Workday, Adaptive Planning, or other ERP/FP&A source systems.
  • Power BI experience including semantic model development, dataset optimization, or DirectQuery/Import mode tradeoffs.
  • Agile/Scrum team experience; comfort working in iterative delivery cycles.
  • Relevant cloud certifications: Microsoft Azure Data Engineer (DP-203), AWS Certified Data Analytics, or equivalent.
  • Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field; equivalent professional experience demonstrating the same depth of knowledge is accepted in lieu of formal education.

Responsibilities

  • Design and build scalable data pipelines using Python and cloud-native orchestration tools, including Azure Data Factory, Azure Container Apps Jobs, and Fabric Data Pipelines.
  • Architect data solutions across Microsoft Fabric Warehouses, Azure SQL Database, and AWS (S3, Redshift), selecting the right tool for the workload.
  • Implement Medallion/layered architecture patterns (Bronze to Silver to Gold) for structured, governed data delivery (see the Bronze-to-Silver sketch after this list).
  • Manage and optimize large-scale data warehouse environments with a focus on performance, cost, and maintainability.
  • Develop Python-based ETL/ELT pipelines to ingest and transform data from APIs, flat files, databases, and SaaS platforms.
  • Build and deploy containerized pipeline jobs using Azure Container Apps Jobs, including scheduling, scaling rules, secrets management via Azure Key Vault, and integration with Azure Container Registry.
  • Build and maintain data movement between on-premises SQL Server environments and cloud targets.
  • Design idempotent, fault-tolerant pipeline patterns with robust logging, alerting, and retry logic (a minimal sketch follows this list).
  • Collaborate with analytics and reporting teams to deliver clean, well-documented data models for Power BI or similar BI tools.
  • Manage data infrastructure across Azure (Fabric, Azure SQL, Azure Data Lake, Key Vault, Container Apps, Container Registry) and AWS (S3, EC2, RDS/Redshift).
  • Containerize data workloads using Docker; deploy and operate them as Azure Container Apps Jobs for scheduled batch processing and event-triggered pipeline execution.
  • Implement infrastructure-as-code principles and version-controlled deployment practices using GitHub, Bicep or Terraform, and CI/CD tooling (Azure DevOps or GitHub Actions).
  • Monitor platform health, optimize compute and storage costs, and enforce data security and access governance.
  • Partner with data analysts, BI developers, software engineers, and business stakeholders to translate requirements into technical solutions.
  • Maintain thorough technical documentation: pipeline specs, data dictionaries, runbooks, and architecture diagrams.
  • Champion engineering best practices: code reviews, testing, modular design, and reusable frameworks.
  • Mentor junior engineers and contribute to team standards and knowledge sharing.
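
As a rough illustration of the Medallion pattern referenced above, a sketch of a Bronze-to-Silver step, assuming pandas and hypothetical local parquet paths; in practice the layers would live in OneLake or Azure Data Lake Storage.

    import pandas as pd

    # Hypothetical layer paths; real pipelines would point at OneLake/ADLS URIs.
    BRONZE = "bronze/orders_raw.parquet"
    SILVER = "silver/orders_clean.parquet"

    def bronze_to_silver() -> None:
        # Bronze holds raw, append-only source data; Silver holds cleaned,
        # conformed records that the Gold layer models for consumption.
        raw = pd.read_parquet(BRONZE)
        clean = (
            raw.drop_duplicates(subset="order_id")
               .dropna(subset=["order_id", "order_date"])
               .assign(order_date=lambda df: pd.to_datetime(df["order_date"]))
        )
        clean.to_parquet(SILVER, index=False)

    if __name__ == "__main__":
        bronze_to_silver()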
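
And a minimal sketch of the idempotent, fault-tolerant pattern described above: retries with backoff plus a watermark-bounded load so a re-run cannot duplicate data. The retry helper and the load function are hypothetical.

    import logging
    import time

    log = logging.getLogger("pipeline")

    def with_retries(fn, attempts=3, backoff_seconds=30):
        # Retry a pipeline step with linear backoff, logging every failure;
        # the final failure propagates so alerting can fire on it.
        for attempt in range(1, attempts + 1):
            try:
                return fn()
            except Exception:
                log.exception("attempt %d/%d failed", attempt, attempts)
                if attempt == attempts:
                    raise
                time.sleep(backoff_seconds * attempt)

    def load_increment(high_watermark: str) -> None:
        # Idempotency: read only rows newer than the stored watermark and
        # write them with a merge/upsert, so a retried run is a safe no-op.
        ...

    with_retries(lambda: load_increment("2024-01-01"))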