Senior Data Quality Engineer

Sidley Austin LLP, Chicago, IL
Hybrid

About The Position

The Senior Data Quality Engineer is a hands-on technical role responsible for designing and building robust, scalable, end-to-end testing frameworks for modern data pipelines. This role focuses on ensuring data quality across ingestion (APIs, SQL Server, flat files) and transformation using Medallion Architecture (Bronze, Silver, Gold). The ideal candidate has strong experience in data quality engineering, Python and SQL, and building automated validation frameworks using tools such as Great Expectations. Experience leveraging AI-assisted development tools (e.g., Claude Code) and prompt engineering to accelerate test generation and standardization is highly valued. This role is critical to ensuring reliable, high-quality data products that power downstream analytics in tools such as Power BI and Tableau.

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent experience)
  • A minimum of 5 years of experience in data quality engineering, data testing, or data platform engineering
  • Strong experience building automated testing frameworks for data pipelines
  • Hands-on experience with data validation tools (Great Expectations, DQX (the Databricks Data Quality Framework), dbt tests)
  • Proficiency in Python and strong SQL skills
  • Experience with AI-assisted development tools (e.g., Claude Code, Copilot)
  • Strong understanding of prompt engineering and reusable AI workflows
  • Experience validating data ingestion from APIs, relational databases, and flat files
  • Deep understanding of data transformation validation and Medallion Architecture
  • Experience integrating testing into CI/CD pipelines (Azure DevOps, GitHub Actions, etc.)
  • Familiarity with data observability and monitoring practices
  • Strong organizational skills
  • Strong attention to detail
  • Good judgment
  • Strong interpersonal communication skills
  • Strong analytical and problem-solving skills
  • Able to work harmoniously and effectively with others
  • Able to preserve confidentiality and exercise discretion
  • Able to work under pressure
  • Able to manage multiple projects with competing deadlines and priorities

Nice To Haves

  • Experience with Azure Databricks and Apache Spark (PySpark)
  • Familiarity with Delta Lake and Unity Catalog
  • Experience building reusable AI prompt libraries or skills (.md or similar formats)
  • Hands-on experience using Claude Code for automation or framework generation
  • Knowledge of data contracts and schema evolution strategies
  • Experience with test data management and synthetic data generation
  • Familiarity with infrastructure as code (Terraform, Bicep)
  • Experience with streaming data pipelines (Kafka, Event Hubs)

Responsibilities

  • Design and implement scalable end-to-end testing frameworks for data pipelines
  • Validate ingestion from APIs, SQL Server, and flat files
  • Ensure data quality across Medallion Architecture layers (Bronze, Silver, Gold)
  • Build automated checks for schema validation, data integrity, and transformations
  • Develop reusable validation patterns using Great Expectations or similar frameworks
  • Leverage Claude Code and prompt engineering to accelerate test generation and standardization
  • Create reusable AI-driven testing assets (.md skills, templates) and workflows for AI-assisted coding and testing
  • Integrate testing into CI/CD pipelines (Azure DevOps, GitHub Actions)
  • Collaborate with data engineers, Product, and DevOps to define data quality standards and acceptance criteria
  • Monitor and improve data reliability, observability, and test coverage
  • Investigate data quality issues and drive root-cause resolution
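To illustrate the kind of reusable validation pattern described above, here is a minimal, framework-free sketch of an automated check for schema completeness, null values, and key uniqueness on an ingested batch. In practice this work would be done with Great Expectations, DQX, or dbt tests; all function, field, and column names here are illustrative assumptions, not part of the posting.

```python
def validate_rows(rows, required_columns, non_null=(), unique_key=None):
    """Return a list of human-readable data quality failures for a batch.

    rows             -- iterable of dict records (e.g., parsed from an API
                        response, SQL Server query, or flat file)
    required_columns -- set of column names every row must contain
    non_null         -- columns that must not be None or empty
    unique_key       -- optional column that must be unique across the batch
    """
    failures = []
    seen_keys = set()
    for i, row in enumerate(rows):
        # Schema validation: every required column must be present.
        missing = required_columns - row.keys()
        if missing:
            failures.append(f"row {i}: missing columns {sorted(missing)}")
        # Null checks on critical fields.
        for col in non_null:
            if row.get(col) in (None, ""):
                failures.append(f"row {i}: null value in '{col}'")
        # Integrity check: primary-key uniqueness within the batch.
        if unique_key is not None:
            key = row.get(unique_key)
            if key in seen_keys:
                failures.append(f"row {i}: duplicate {unique_key} {key!r}")
            seen_keys.add(key)
    return failures


# Hypothetical example: validating a Bronze-layer batch before
# promoting it to Silver.
batch = [
    {"id": 1, "name": "Acme", "amount": 100},
    {"id": 1, "name": "", "amount": 200},  # duplicate id, empty name
]
problems = validate_rows(
    batch, {"id", "name", "amount"}, non_null=("name",), unique_key="id"
)
```

In a CI/CD pipeline (Azure DevOps, GitHub Actions), a check like this would run as a gated test step, failing the build when `problems` is non-empty.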

Benefits

  • Bonus eligibility
  • Comprehensive benefits program