Adelante Health Care, posted about 2 months ago
Full-time • Mid Level
Phoenix, AZ
501-1,000 employees
Ambulatory Health Care Services

The Data Engineer builds scalable, trusted, and secure data models and pipelines that support analytics, reporting, and operations across our healthcare organization, delivering validated, governed, and actionable data products that drive decision-making throughout the business. The position involves end-to-end ownership of data workflows, including ingestion, transformation, modeling, and delivery, and requires strong proficiency in SQL and Python as well as deep experience with Azure data services such as Synapse, Data Factory, Logic Apps, and Microsoft Fabric. The role also calls for continuous validation at every stage of the process and clear, collaborative communication with both stakeholders and technical team members to ensure data accuracy, alignment, and reliability.
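As an illustration of the "continuous validation" the role describes, a minimal completeness/integrity check might look like the sketch below. This is a hypothetical example for context, not Adelante's actual framework; the field names (`patient_id`, `visit_date`, `clinic`) are illustrative assumptions.

```python
from datetime import date

def validate_rows(rows: list[dict]) -> list[str]:
    """Hypothetical completeness/integrity checks for a patient extract.

    Returns a list of human-readable failure messages; an empty list
    means the extract passed all checks.
    """
    errors = []
    required = {"patient_id", "visit_date", "clinic"}  # illustrative schema
    seen_ids = set()
    for i, row in enumerate(rows):
        missing = required - row.keys()
        if missing:
            errors.append(f"row {i}: missing fields {sorted(missing)}")
            continue
        pid = row["patient_id"]
        if pid is None:
            errors.append(f"row {i}: null patient_id")
        elif pid in seen_ids:
            errors.append(f"row {i}: duplicate patient_id {pid}")
        else:
            seen_ids.add(pid)
        if row["visit_date"] > date.today():
            errors.append(f"row {i}: visit_date in the future")
    return errors
```

In practice, checks like these would run at each pipeline stage (post-ingestion, post-transformation, pre-delivery) so that bad records are caught before they reach downstream reports.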

  • Build and maintain robust data pipelines that ingest, transform, validate, and deliver data from multiple internal and external sources.
  • Design data models that align with business use cases and support high-performance reporting and analytics.
  • Implement data validation logic to ensure quality, trust, and accuracy in all downstream outputs.
  • Automate workflows using Logic Apps and Data Factory to support real-time and batch use cases.
  • Ensure pipeline observability, logging, and monitoring for proactive issue detection and resolution.
  • Collaborate with data analysts, BI developers, and business stakeholders to translate requirements into engineering solutions.
  • Provide secure, governed access to data through role-based access controls and RLS configurations.
  • Participate in DevOps workflows to version-control and automate deployments across environments.
  • Publish curated, validated datasets to Power BI workspaces and Fabric Lakehouses, supporting enterprise-wide reporting.
  • Maintain documentation, data dictionaries, and flow diagrams to ensure transparency and maintainability.
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related technical field, preferred.
  • Three (3) to five (5) years of experience
  • Strong proficiency in SQL for large-scale data transformation, query optimization, and schema development.
  • Advanced Python skills for scripting ETL/ELT logic, validation routines, automation, and integration.
  • Demonstrated ability to design and implement scalable, maintainable data pipelines using Azure Data Factory, Synapse Pipelines, and Logic Apps.
  • Expertise in data modeling, including normalized and dimensional models (star/snowflake), surrogate keys, SCD handling, and business-friendly schema design.
  • Experience implementing robust data validation and quality frameworks, including completeness, accuracy, integrity, and anomaly detection.
  • Proficient with Azure Synapse Analytics (dedicated and serverless pools), Azure Data Lake Storage, and Logic Apps for managing ingestion, transformation, and orchestration.
  • Working knowledge of data security practices and compliance (HIPAA, PHI), including RBAC, RLS, and secure data delivery.
  • Familiarity with CI/CD pipelines and infrastructure-as-code using Azure DevOps or GitHub Actions.
  • Ability to deliver trusted, analytics-ready datasets to business teams via Microsoft Fabric Lakehouses and Power BI.
  • Experience supporting Power BI development by preparing validated semantic models, implementing row-level security, and enabling performance-optimized report layers, always backed by solid pipeline foundations.
  • Certification in Cardiopulmonary Resuscitation (CPR) and AED for the Health Care Professional, through courses that follow American Heart Association or Red Cross guidelines (cognitive and skills evaluations)
  • Valid Level One Fingerprint Clearance Card issued by the Arizona Department of Public Safety for all specialty behavioral health locations, or the ability to obtain one within 30 days of employment
  • Prioritization and multitasking skills are required
  • Competency in working with people of various cultures
  • Ability to perform a variety of assignments requiring considerable exercise of independent judgment
  • Knowledge of healthcare interoperability standards such as FHIR or HL7, or experience working with clinical data models or claims data.
  • Exposure to Microsoft Purview or other data cataloging tools for metadata management, lineage tracking, and automated governance.
  • Background in building or supporting machine learning pipelines in Azure using Synapse ML, Azure ML, or Databricks.
  • Experience with Delta Lake or lakehouse optimization strategies, including schema evolution, time travel, and ACID transactions on big data.
  • Proficiency in performance tuning for large-scale BI environments, including query optimization, caching strategies, and dataset partitioning.
  • Ability to lead data architecture discussions, contribute to platform design decisions, or mentor junior engineers on engineering best practices.
  • Understanding of data privacy engineering, such as implementing differential privacy, data anonymization, or secure multi-party computation in analytics workflows.
  • Hands-on experience with cross-cloud or hybrid-cloud architectures, especially integrating Azure with AWS or on-premise systems.
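
To give a concrete sense of the SCD handling mentioned in the data-modeling requirement, here is a hypothetical sketch of a Type 2 slowly-changing-dimension merge. The dimension shape (`sk`, `clinic_id`, `name`, validity dates) is an illustrative assumption, not Adelante's schema; production pipelines would typically express this as a SQL `MERGE` in Synapse rather than in application code.

```python
from datetime import date

def apply_scd2(dim: list[dict], updates: list[dict], today: date) -> list[dict]:
    """Hypothetical SCD Type 2 merge on an in-memory dimension table.

    When a tracked attribute changes, the current row is closed
    (valid_to set, current flag cleared) and a new current row is
    inserted with a fresh surrogate key.
    """
    next_sk = max((r["sk"] for r in dim), default=0) + 1
    current = {r["clinic_id"]: r for r in dim if r["current"]}
    for u in updates:
        row = current.get(u["clinic_id"])
        if row is not None and row["name"] == u["name"]:
            continue  # no attribute change: nothing to do
        if row is not None:
            row["valid_to"] = today   # close out the superseded version
            row["current"] = False
        dim.append({"sk": next_sk, "clinic_id": u["clinic_id"],
                    "name": u["name"], "valid_from": today,
                    "valid_to": None, "current": True})
        next_sk += 1
    return dim
```

The same pattern (close the old version, insert the new one under a fresh surrogate key) is what star/snowflake fact tables rely on so that historical facts keep pointing at the attribute values that were in effect when they occurred.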