Lead Azure Data Engineer

Hexion Careers · Worthington, OH

About The Position

Hexion’s ED&A team is hiring a Lead Azure Data Engineer to develop and manage a scalable Azure data platform that integrates SAP ECC, SAP BW, and other sources into reliable enterprise data for analytics and modeling.

Requirements

  • Bachelor’s degree in Computer Science or a related field (or equivalent experience).
  • 7+ years in Azure data engineering with production ownership.
  • Strong proficiency in Azure Databricks, Python (PySpark), and SQL.
  • Databricks Unity Catalog experience required (catalog/schema design, permissions/access control, governance); a minimal sketch follows this list.
  • Understanding of best-practice data lake and lakehouse design patterns (e.g., Kimball-based star and snowflake data modeling).
  • Databricks DLT experience required (DLT pipelines, DLT tables, expectations/data quality).
  • Strong command of Delta Lake performance tuning and operational best practices.
  • Experience integrating ADLS Gen2 and orchestrating with Azure Data Factory.
  • Strong cross-functional communication and ownership mindset.
  • Pragmatic problem-solver with continuous improvement orientation.
  • Comfortable operating in an environment with evolving priorities.
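
To illustrate the Unity Catalog requirement above, here is a minimal governance sketch. It assumes a Unity Catalog-enabled Databricks workspace (where `spark` is the ambient SparkSession) and sufficient admin privileges; the catalog, schema, and group names are hypothetical.

```python
# Minimal Unity Catalog governance sketch; all object and group names are
# hypothetical. Assumes a Unity Catalog-enabled Databricks workspace where
# `spark` is the ambient SparkSession and the caller can manage the metastore.

# Unity Catalog uses a three-level namespace: catalog -> schema -> table.
spark.sql("CREATE CATALOG IF NOT EXISTS enterprise_data")
spark.sql("CREATE SCHEMA IF NOT EXISTS enterprise_data.finance_curated")

# Access control: read-only path for analysts, write access for engineers.
spark.sql("GRANT USE CATALOG ON CATALOG enterprise_data TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA enterprise_data.finance_curated TO `analysts`")
spark.sql("GRANT SELECT ON SCHEMA enterprise_data.finance_curated TO `analysts`")
spark.sql("GRANT CREATE TABLE ON SCHEMA enterprise_data.finance_curated TO `data_engineers`")
```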

Nice To Haves

  • SAP ECC/BW integration experience (knowledge of SAP tables, CDS/OData).
  • CI/CD for data pipelines (Azure DevOps, Git).
  • Exposure to feature engineering and lifecycle practices for advanced modeling use cases (testing, reproducibility, monitoring).
  • Familiarity with enterprise data governance, metadata management, and security patterns.

Responsibilities

  • Build, manage, and optimize data pipelines using Azure Databricks (PySpark), SQL, ADLS Gen2, and Azure Data Factory, enabling automated, incremental, and standardized data ingestion and deployment.
  • Build and maintain Delta Live Tables (DLT) pipelines in Databricks, including DLT tables, data quality expectations, and multi-layer (Bronze/Silver/Gold) patterns (see the first sketch after this list).
  • Implement Unity Catalog for centralized access control, auditability, and lineage-aligned governance.
  • Deliver curated, reusable datasets, managed as data products, that support enterprise performance management and advanced modeling workloads.
  • Prepare structured and document-style datasets for indexing and retrieval workflows (e.g., embedding/vector-ready content and metadata standards for AI enablement), facilitating downstream knowledge and automation solutions (see the second sketch after this list).
  • Implement monitoring, alerting, logging, and cost/performance tuning; create operational runbooks to support SLAs.
  • Lead code reviews, define engineering standards, and mentor junior developers; partner with stakeholders to prioritize high-impact improvements.
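
As a concrete illustration of the ingestion and DLT responsibilities above, here is a minimal Bronze/Silver pipeline sketch. It runs only inside a Databricks Delta Live Tables pipeline; the source path, table names, and columns are hypothetical.

```python
# Minimal Delta Live Tables sketch; paths, tables, and columns are hypothetical.
# The `dlt` module and ambient `spark` session are available only when this
# file is executed as part of a Databricks DLT pipeline.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw orders ingested incrementally via Auto Loader.")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader: incremental file ingestion
        .option("cloudFiles.format", "json")
        .load("abfss://raw@example.dfs.core.windows.net/orders/")  # hypothetical ADLS Gen2 path
    )

@dlt.table(comment="Silver: validated and typed orders.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop rows failing this expectation
@dlt.expect("positive_amount", "amount > 0")                   # record-only (warn) expectation
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .withColumn("order_date", F.to_date("order_date"))
    )
```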
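
And for the retrieval-preparation responsibility, a minimal sketch of flattening curated documents into an embedding-ready Delta table with retrieval metadata; the chunking strategy, table names, and columns are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: split document text into fixed-size chunks with metadata,
# ready for a downstream embedding/indexing job. All names are hypothetical,
# and an ambient Databricks `spark` session is assumed.
from pyspark.sql import functions as F

CHUNK_SIZE = 1000  # characters per chunk; real splitters are token/structure aware

docs = spark.table("enterprise_data.knowledge.documents_silver")  # hypothetical source

chunks = (
    docs
    # Explode each body into fixed-length character chunks (substring is 1-indexed).
    .withColumn("content", F.explode(F.expr(
        f"transform(sequence(0, cast(length(body) / {CHUNK_SIZE} as int)), "
        f"i -> substring(body, i * {CHUNK_SIZE} + 1, {CHUNK_SIZE}))"
    )))
    .filter(F.length("content") > 0)  # drop the empty tail chunk on exact multiples
    .select(
        "doc_id",
        F.monotonically_increasing_id().alias("chunk_id"),
        "source_system",   # lineage metadata used as retrieval filters
        "last_modified",
        "content",
    )
)

chunks.write.mode("overwrite").saveAsTable("enterprise_data.knowledge.document_chunks")
```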