Azure Principal Data Engineer

Boston Scientific
Arden Hills, MN
Hybrid

About The Position

Boston Scientific was recognized by Forbes as one of the Best Workplaces for Engineers in 2026, reflecting a culture where engineers do meaningful work.

The Azure Principal Data Engineer is a senior, hands-on technical leader responsible for designing, building and evolving scalable, reusable Azure data platform capabilities. In this role, you will transform governed data into trusted, production-ready data products that enable clinical insights, commercial analytics and AI-driven innovation. You will operate at the intersection of architecture, engineering execution and governance automation, shaping how data flows from source systems through ingestion, transformation, quality enforcement and semantic enrichment.

This role partners closely with data engineering, platform engineering, security, site reliability engineering (SRE), DevOps and governance teams to deliver a modern data platform that operates as a product, not a collection of one-off solutions.

At Boston Scientific, we value collaboration and synergy. This role follows a hybrid work model requiring employees to be in our local office at least three days per week. Boston Scientific will not offer sponsorship or take over sponsorship of an employment visa for this position at this time. Relocation assistance is not available for this position at this time. This is a defined-term role with an expected duration of 24 months from the employee's start date. There is no guarantee of full-time employment beyond the listed duration.

Requirements

  • Minimum of 13 years' experience in data engineering, with a minimum of 6 years' experience designing and delivering cloud-native data platforms in Azure or comparable environments
  • Minimum of 4 years' experience building scalable data platform capabilities, including infrastructure-as-code, CI/CD, data integration (batch, streaming, CDC) and governance automation (data contracts, quality, lineage and access control)
  • Demonstrated experience with Azure data services, including Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Microsoft Purview, Azure Event Hubs, Azure Functions and Azure Key Vault
  • Strong programming and platform engineering experience with Python and Terraform, and with distributed data processing frameworks such as Spark and Kafka
  • Experience with data modeling techniques, including dimensional modeling and Data Vault
  • Bachelor’s degree in computer science, data engineering, information systems or a related technical field
  • Experience working with regulated or sensitive data environments
  • Ability to travel up to 10% for workshops, migration planning and cross-functional collaboration

Nice To Haves

  • Master’s degree in computer science, data engineering or a related field
  • Experience with modern lakehouse and governance technologies, including Microsoft Fabric, Databricks Unity Catalog or Apache Iceberg
  • Experience with advanced data capabilities, including knowledge graphs, ontology modeling, semantic layer design or GenAI enablement
  • Experience supporting AI/ML platforms, including governed analytical workspaces and model lifecycle management
  • Familiarity with regulated industries such as medical devices, healthcare or life sciences
  • Experience leading or executing cloud data migrations, including AWS-to-Azure transitions and cross-cloud data transfer strategies
  • Hands-on experience with AWS data services, including S3, RDS, Redshift, Glue, Lambda, Kinesis, DMS, Step Functions and ECS

Responsibilities

  • Design and implement reusable Azure data platform patterns supporting governed ingestion, batch, streaming, CDC and event-driven processing, multi-paradigm storage and data product delivery
  • Lead migration of complex data pipelines, including event-driven ingestion, CDC-to-stream processing, dbt transformations and API-based batch loads, ensuring stage-gated reconciliation and validated cutovers
  • Translate governance requirements into automated platform capabilities, including data contracts, schema validation, data quality rules, lineage capture, metadata management and access control enforcement
  • Build and maintain infrastructure-as-code modules, CI/CD pipeline templates and Terraform-based provisioning frameworks to enable scalable, governed data platform deployment
  • Architect platform capabilities for security, observability, resiliency and cost management, including encryption, identity and access management, telemetry, SLOs and cost attribution
  • Enable semantic data modeling by contributing to ontology-backed data layers and knowledge graph integration, supporting consistent definitions and GenAI-ready data foundations
  • Evaluate and recommend platform technologies, document architectural decisions and establish engineering standards to elevate data platform maturity
  • Mentor and coach engineers, promoting best practices and strengthening engineering capabilities across the organization
