Senior Data Engineer

Qode
Ohio, OH
Hybrid

About The Position

This position is for a Senior Data Engineer specializing in Informatica and PySpark. The role requires 8–10 years of experience in Data Engineering and Data Analysis, with a focus on ETL design, development, and optimization using Informatica PowerCenter/IDQ and on large-scale data processing with PySpark. The engineer will work with Hadoop technologies, Python, and Kafka for data pipelines, and will be involved in the full ETL lifecycle from extraction to loading. The role also involves Agile project delivery using Jira and requires strong client-interaction and leadership skills.

Requirements

  • 8–10 years in Data Engineering and Data Analysis
  • Strong hands-on experience in Informatica PowerCenter/IDQ for ETL design, development, and optimization
  • Advanced skills in PySpark for large-scale data processing, transformation, and analytics
  • Solid working knowledge of Hadoop technologies (HDFS, Hive, Sqoop, MapReduce)
  • Proficiency in Python and Kafka for streaming and batch data pipelines
  • Strong understanding of database concepts, data design, data modeling, and ETL workflows
  • Experience in analyzing, designing, and coding ETL programs including data extraction, ingestion, quality checks, normalization, and loading
  • Hands-on experience with Agile methodology and Jira for project delivery
  • Proven ability in client-facing roles with strong communication and leadership skills to coordinate across SDLC
  • Master’s or Bachelor’s degree in Computer Science or related field
  • Strong problem-solving skills
  • Ability to work in cross-functional teams

Nice To Haves

  • Exposure to AWS data components and analytics
  • Familiarity with machine learning models and AI concepts
  • Experience with data modeling tools such as Erwin

Responsibilities

  • Design, develop, and optimize ETL using Informatica PowerCenter/IDQ
  • Process, transform, and analyze large-scale data using PySpark
  • Work with Hadoop technologies (HDFS, Hive, Sqoop, MapReduce)
  • Develop streaming and batch data pipelines using Python and Kafka
  • Analyze, design, and code ETL programs including data extraction, ingestion, quality checks, normalization, and loading
  • Coordinate across SDLC in client-facing roles
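The quality-check, normalization, and loading steps named in the responsibilities could be sketched in plain Python as follows. This is a minimal illustration of a typical ETL cleansing stage, not the employer's actual pipeline; the field names (`order_id`, `amount`) and logic are assumptions for the example.

```python
from datetime import date

def quality_check_and_normalize(records):
    """Drop rows missing the key, dedupe on it, and normalize amounts.

    `records` is a list of dicts with illustrative fields
    `order_id` and `amount`; this mirrors a generic ETL
    cleansing stage, not any specific pipeline.
    """
    seen = set()
    out = []
    for rec in records:
        key = rec.get("order_id")
        if key is None or key in seen:
            continue  # quality check: require a key, reject duplicates
        seen.add(key)
        out.append({
            "order_id": key,
            "amount_usd": float(rec.get("amount", 0)),  # normalize type
            "load_date": date.today().isoformat(),      # derive load date
        })
    return out
```

In a production setting the same extract → quality-check → normalize → load shape would typically be expressed as PySpark DataFrame transformations (e.g. `dropna`, `dropDuplicates`, `withColumn`) so it scales across a cluster.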