Data Engineer

InductiveHealth
Remote · Posted 15 days ago

About The Position

InductiveHealth has an opening for a highly skilled, detail-oriented Data Engineer to join a collaborative team in the Public Health space. In this role, you will own the modernization and reliability of data integration and ETL processes, with a primary focus on transitioning legacy SAS-based workflows to SQL Server. You will partner closely with product, engineering, and support teams to ensure data pipelines are performant, accurate, and scalable, while documenting legacy logic and strengthening long-term data operations through improved automation, monitoring, and standards.

Requirements

  • Strong hands-on experience with SQL Server development, including advanced T-SQL, query optimization, and performance tuning in production environments.
  • Demonstrated experience designing, maintaining, and modernizing data pipelines and ETL processes, particularly in environments transitioning from legacy architectures to more scalable, maintainable data platforms.
  • Ability to analyze, interpret, and translate legacy data transformation logic (including SAS-based workflows) into modern, SQL-based implementations, with an emphasis on clarity, performance, and long-term maintainability.
  • Experience with SQL Server–based data integration tooling or comparable modern data orchestration frameworks, including support for incremental processing, dependency management, and multi-step pipelines.
  • Familiarity with modern data engineering concepts such as idempotent pipelines, incremental ingestion patterns, schema evolution, and environment-aware deployments (an illustrative sketch of an incremental, idempotent load follows this list).
  • Comfort working within complex, partially undocumented systems and progressively improving them through refactoring, documentation, and automation.
  • Experience supporting and operating production data pipelines, including diagnosing failures, resolving data quality issues, and partnering cross-functionally to restore and improve system reliability.
  • Experience managing data workflows across multiple environments (development, scale, production) with attention to consistency, validation, and release coordination.
  • Strong problem-solving skills with a systems-level mindset, particularly when identifying root causes of performance, scalability, or data integrity issues.
  • Ability to work collaboratively with engineering, product, and support teams while maintaining clear ownership of data platform outcomes.
  • Clear written and verbal communication skills, especially when documenting technical systems and explaining data flows to both technical and non-technical stakeholders.
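
For illustration only, not part of the role description: a minimal T-SQL sketch of the incremental, idempotent load pattern referenced above, using a watermark table and MERGE. All object names (etl.Watermark, stg.Encounter, dbo.Encounter) are hypothetical assumptions, not details from the posting.

  -- Hypothetical incremental, idempotent load: only rows changed since the last
  -- successful run are merged, and re-running the batch yields the same result.
  -- Assumes a watermark row already exists for this pipeline.
  DECLARE @LastLoaded datetime2 =
      (SELECT LastLoadedAt FROM etl.Watermark WHERE PipelineName = 'Encounter');

  BEGIN TRANSACTION;

  MERGE dbo.Encounter AS tgt
  USING (
      SELECT EncounterId, PatientId, EncounterDate, UpdatedAt
      FROM stg.Encounter
      WHERE UpdatedAt > @LastLoaded          -- incremental: skip unchanged rows
  ) AS src
      ON tgt.EncounterId = src.EncounterId
  WHEN MATCHED AND src.UpdatedAt > tgt.UpdatedAt THEN
      UPDATE SET PatientId     = src.PatientId,
                 EncounterDate = src.EncounterDate,
                 UpdatedAt     = src.UpdatedAt
  WHEN NOT MATCHED BY TARGET THEN
      INSERT (EncounterId, PatientId, EncounterDate, UpdatedAt)
      VALUES (src.EncounterId, src.PatientId, src.EncounterDate, src.UpdatedAt);

  -- Advance the watermark only after the merge succeeds; COALESCE keeps it
  -- unchanged when the staging table is empty.
  UPDATE etl.Watermark
  SET LastLoadedAt = COALESCE((SELECT MAX(UpdatedAt) FROM stg.Encounter), LastLoadedAt)
  WHERE PipelineName = 'Encounter';

  COMMIT TRANSACTION;

A pattern along these lines is what typically replaces destructive truncate-and-reload refreshes with targeted, repeatable merges.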

Nice To Haves

  • Prior experience working with public health data systems, including familiarity with HL7 or similar healthcare data standards.
  • Hands-on experience migrating large, long-lived ETL systems from legacy technologies to SQL Server–based architectures.
  • Deep understanding of ETL performance optimization at scale, including parallel processing and high-volume data loads.
  • Experience designing or improving incremental data ingestion strategies in systems that historically relied on full refreshes.
  • Demonstrated ability to bring structure to undocumented or tribal-knowledge-heavy systems through clear documentation and process improvement.
  • Experience implementing robust logging, monitoring, and alerting for data pipelines (a minimal run-log sketch follows this list).
  • Comfort balancing project-based migration work with ongoing production support responsibilities.
  • Ability to proactively identify risks in data workflows and address them before they impact downstream systems or customers.
  • Experience serving as a technical owner or go-to expert for critical data infrastructure.
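
Purely as an illustrative sketch of the logging and monitoring item above: the T-SQL below records each pipeline run in a hypothetical etl.PipelineRunLog table so failures are visible to monitoring or alerting jobs. The table, columns, and pipeline name are assumptions made for the example.

  -- Hypothetical run log used to monitor and alert on pipeline executions.
  CREATE TABLE etl.PipelineRunLog (
      RunId        bigint IDENTITY(1,1) PRIMARY KEY,
      PipelineName sysname        NOT NULL,
      StartedAt    datetime2      NOT NULL DEFAULT SYSUTCDATETIME(),
      FinishedAt   datetime2      NULL,
      Status       varchar(20)    NOT NULL DEFAULT 'Running',  -- Running / Succeeded / Failed
      ErrorMessage nvarchar(4000) NULL
  );

  DECLARE @RunId bigint;
  INSERT INTO etl.PipelineRunLog (PipelineName) VALUES ('Encounter');
  SET @RunId = SCOPE_IDENTITY();

  BEGIN TRY
      -- ... ETL steps for this pipeline run go here ...
      UPDATE etl.PipelineRunLog
      SET Status = 'Succeeded', FinishedAt = SYSUTCDATETIME()
      WHERE RunId = @RunId;
  END TRY
  BEGIN CATCH
      UPDATE etl.PipelineRunLog
      SET Status = 'Failed', FinishedAt = SYSUTCDATETIME(), ErrorMessage = ERROR_MESSAGE()
      WHERE RunId = @RunId;
      THROW;  -- re-raise so a scheduler or alerting job can react to the failure
  END CATCH;

A simple query over Status and FinishedAt can then feed dashboards or alerts when a run fails or exceeds its expected duration.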

Responsibilities

  • Lead the continued transition of legacy SAS-based ETL processes to SQL Server, completing remaining migrations and validating results through parallel processing and data reconciliation (see the reconciliation sketch after this list).
  • Translate undocumented or minimally documented legacy ETL logic into maintainable, fault-tolerant SQL Server and SSIS workflows.
  • Improve and standardize incremental data processing patterns, reducing reliance on full data refreshes and destructive reload processes.
  • Own the reliability and performance of ETL pipelines by identifying and resolving bottlenecks, particularly in high-volume and performance-sensitive workflows.
  • Investigate and correct data flow issues that prevent records from consistently reaching downstream systems across environments.
  • Support production data operations by partnering with product, engineering, and support teams to triage and resolve data-related issues and support tickets.
  • Participate in regular operational check-ins and serve as a primary escalation point for ETL and data pipeline concerns.
  • Document ETL logic, dependencies, and operational processes to reduce institutional knowledge risk and improve long-term maintainability.
  • Introduce improved logging, monitoring, automation, and repeatability across data integration workflows.
  • Collaborate with engineering peers and domain experts to establish clearer ownership and standards for ETL and data pipeline practices.
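
To make the parallel-run validation in the first responsibility concrete, here is a hedged T-SQL sketch of a reconciliation check between legacy and migrated outputs. The legacy.CaseReport and dbo.CaseReport tables and their columns are hypothetical, chosen only for illustration.

  -- Hypothetical reconciliation between legacy (SAS-exported) and migrated outputs.
  -- Row counts should match, and no rows should appear in one output but not the other.
  SELECT
      (SELECT COUNT(*) FROM legacy.CaseReport) AS LegacyRows,
      (SELECT COUNT(*) FROM dbo.CaseReport)    AS MigratedRows;

  -- Rows produced by the legacy process but missing or different in the new one.
  SELECT CaseId, PatientId, ReportDate, Disposition
  FROM legacy.CaseReport
  EXCEPT
  SELECT CaseId, PatientId, ReportDate, Disposition
  FROM dbo.CaseReport;

  -- Rows produced by the new process that the legacy process never emitted.
  SELECT CaseId, PatientId, ReportDate, Disposition
  FROM dbo.CaseReport
  EXCEPT
  SELECT CaseId, PatientId, ReportDate, Disposition
  FROM legacy.CaseReport;

Matching row counts plus two empty EXCEPT result sets indicate the migrated pipeline reproduces the legacy output for the compared columns.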

Benefits

  • Virtual-first, remote organization and culture
  • Flexible Paid Time Off (PTO)
  • 401(k) retirement plan with corporate matching
  • Medical, prescription, vision, and dental coverage (multiple plans based on your needs)
  • Short-Term and Long-Term Disability (for employee)
  • Life Insurance (for employee)
  • New Team Member support for home office setup

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees
