Data Engineer (Data Operations)

IQVIA, Rosemont, IL

About The Position

The Data Engineer (Data Operations) is responsible for building, maintaining, and optimizing downstream data pipelines and analytics-ready datasets that power business intelligence and reporting. This role focuses on data troubleshooting, transformation, and delivery, ensuring that upstream data is validated and converted into reliable, curated data sources for BI and analytics teams. The individual will work hands-on with Snowflake, SQL, and Python to develop data models, resolve data issues, and support production data operations, enabling the BI team to efficiently consume high-quality data through their visualization tools.

Requirements

  • Bachelor’s degree
  • 7+ years of experience in data engineering, data operations, or backend data development
  • Strong hands-on experience with:
      ◦ Snowflake (data warehouse design, optimization, and management)
      ◦ SQL (advanced querying, performance tuning, and data transformation)
      ◦ Python (data processing, scripting, and automation)
  • Proven experience building and supporting production-grade data pipelines and workflows
  • Strong experience with ETL/ELT tools, orchestration frameworks (e.g., Airflow, Automic), or similar pipeline management tools
  • Experience troubleshooting data issues, pipeline failures, and data inconsistencies, with strong root cause analysis skills
  • Solid understanding of data modeling, data warehousing concepts, and data lifecycle management
  • Familiarity with CI/CD, version control (Git), and production deployment practices
  • Strong problem-solving skills and ability to work independently in fast-paced environments
  • Excellent communication skills with the ability to explain data issues and solutions to both technical and non-technical stakeholders

Nice To Haves

  • Experience with R, as the team transitions from R to newer technologies
  • Experience working with healthcare, pharmaceutical, or life sciences data strongly preferred

Responsibilities

  • Build and maintain curated data layers and data sources
  • Develop and optimize SQL- and Python-based transformations in Snowflake
  • Troubleshoot data quality issues originating from upstream pipelines, including identifying root causes and coordinating fixes where needed
  • Validate incoming datasets and ensure accuracy, completeness, and consistency before they are surfaced to downstream users
  • Design and manage analytics-ready data models and tables to support reporting and dashboards
  • Serve as a key point of contact for reporting data layers, ensuring data sources meet usability and performance needs
  • Monitor data pipelines and datasets for failures, anomalies, and performance issues, and proactively resolve them
  • Support production operations, including incident management, backlog prioritization, and SLA adherence
  • Partner with upstream engineering teams to escalate and resolve ingestion-related issues, while not owning ingestion directly
  • Improve processes for data validation, monitoring, and operational efficiency

Benefits

  • A range of health, welfare, and/or other benefits