Data Engineer (onsite)

Vitaver & Associates, Oklahoma City, OK
Onsite

About The Position

This is a temporary project for a Data Engineer in Oklahoma City, OK, with an estimated duration of 6-12 months and possible extensions. The role requires 100% of the time to be spent at the Client's site, with no telecommuting or remote work; candidates must be able to relocate as required. Responsibilities include designing, implementing, and maintaining ELT/ETL pipelines across cloud platforms; architecting and modernizing data acquisition and ingestion pipelines for large-scale healthcare data; implementing and managing data storage solutions; planning and executing data migrations; designing and architecting schemas for data warehouse environments; evaluating and integrating new data sources; and documenting data architectures and lineage.

Requirements

  • Availability to work 100% of the time at the Client's site in Oklahoma City, OK
  • Hands-on Data Engineering experience (5+ years)
  • Experience with SQL including correlated subqueries and window functions
  • Experience with cloud platforms (GCP BigQuery) including query performance optimization with partitioning strategies
  • Experience with Python for data pipeline development, automation, and API integration including pandas, NumPy, and SQLAlchemy
  • Experience with ETL transformation tools such as Azure Data Factory, GCP Dataproc, Dataflow, SSIS, and dbt
  • Experience with orchestration tools
  • Experience with REST APIs for data ingestion and system interoperability
  • Experience with version control (Git)
  • Experience with R for statistical analysis and data manipulation
  • Experience with Python ML libraries
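As a quick illustration of the SQL skills listed above (correlated subqueries and window functions), here is a minimal sketch using Python's built-in sqlite3 module; the patient-visit table, column names, and values are invented for this example and are not from the posting:

```python
import sqlite3

# Hypothetical patient-visit table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE visits (patient_id INTEGER, visit_date TEXT, charge REAL);
INSERT INTO visits VALUES
  (1, '2024-01-05', 120.0),
  (1, '2024-02-10', 80.0),
  (2, '2024-01-20', 200.0);
""")

# Window function: running total of charges per patient, ordered by date.
running = conn.execute("""
    SELECT patient_id, visit_date,
           SUM(charge) OVER (PARTITION BY patient_id
                             ORDER BY visit_date) AS running_total
    FROM visits
    ORDER BY patient_id, visit_date
""").fetchall()

# Correlated subquery: each patient's most recent visit. The inner query
# re-runs per outer row, referencing v.patient_id from the outer scope.
latest = conn.execute("""
    SELECT patient_id, visit_date
    FROM visits v
    WHERE visit_date = (SELECT MAX(visit_date)
                        FROM visits w
                        WHERE w.patient_id = v.patient_id)
""").fetchall()
```

The same patterns carry over to BigQuery SQL, where window functions are commonly paired with partitioned tables for the query-optimization work the role describes.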

Nice To Haves

  • Experience in a HIPAA-regulated environment with data privacy and security requirements
  • Experience with standards-based health data exchange (HL7 v2/v3, FHIR)
  • Experience with cell suppression and statistical disclosure logic within SQL for public-facing health data outputs
  • Experience with SAS
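The cell-suppression item above refers to masking small counts in public-facing outputs so that individuals cannot be re-identified. A minimal sketch of one common approach, threshold-based suppression in SQL (the threshold of 5, the table, and the data are all invented for illustration):

```python
import sqlite3

# Hypothetical aggregate case counts by county (invented data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE case_counts (county TEXT, cases INTEGER);
INSERT INTO case_counts VALUES
  ('Adair', 3), ('Cleveland', 42), ('Tulsa', 150);
""")

# Threshold-based cell suppression: counts below 5 are reported as NULL
# (often rendered as '*' downstream) to reduce re-identification risk.
rows = conn.execute("""
    SELECT county,
           CASE WHEN cases < 5 THEN NULL ELSE cases END AS reported_cases
    FROM case_counts
    ORDER BY county
""").fetchall()
```

Real statistical disclosure control usually goes further (e.g., complementary suppression so totals cannot be back-calculated), but the CASE-based pattern above is the building block.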

Responsibilities

  • Design, implement, and maintain ELT/ETL pipelines across cloud platforms (Azure, GCP, AWS)
  • Architect and modernize data acquisition and ingestion pipelines for large-scale healthcare data
  • Implement and manage data storage solutions (data lakes, warehouses) utilizing appropriate partitioning, security, and lifecycle policies
  • Plan and execute data migrations across platforms including schema mapping, data validation, and cutover coordination
  • Design and architect schemas to support migration of transactional database structures to data warehouse environments including dimensional modeling
  • Evaluate and integrate new and emerging data sources, link datasets across systems, and develop processes to support novel data types
  • Document data architectures, lineage, and standards and provide technical guidance and mentorship
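The dimensional-modeling responsibility above typically means restructuring transactional tables into a star schema of fact and dimension tables. A minimal sketch, again using sqlite3 for portability; all table and column names are hypothetical:

```python
import sqlite3

# Minimal star schema: one fact table keyed to two dimension tables
# (names are hypothetical; SQLite is used only for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_facility (facility_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_admissions (
    date_key INTEGER REFERENCES dim_date(date_key),
    facility_key INTEGER REFERENCES dim_facility(facility_key),
    admissions INTEGER
);
INSERT INTO dim_date VALUES (20240105, '2024-01-05');
INSERT INTO dim_facility VALUES (1, 'General Hospital');
INSERT INTO fact_admissions VALUES (20240105, 1, 37);
""")

# Typical warehouse query: join the fact table to its dimensions.
row = conn.execute("""
    SELECT d.full_date, f.name, a.admissions
    FROM fact_admissions a
    JOIN dim_date d ON a.date_key = d.date_key
    JOIN dim_facility f ON a.facility_key = f.facility_key
""").fetchone()
```

The surrogate integer keys (date_key, facility_key) are the usual mechanism for mapping transactional source schemas into the warehouse during the migrations the role describes.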