Senior ETL Developer

CGI
Pittsburgh, PA
$58,800 - $156,700 | Onsite

About The Position

This role can be filled from the client site in one of the following locations: Pittsburgh, PA; Dallas, TX; or Cleveland, OH. Join a large-scale, mission-critical data modernization program at a major U.S. bank, where you will work at the intersection of enterprise data platforms, mainframe systems, and downstream business applications.

This role is ideal for a senior engineer who:

  • Enjoys deep, complex data challenges
  • Has strong ETL experience with Informatica PowerCenter
  • Wants to work on high-volume, regulated, business-critical data
  • Values stability, scale, and long-term impact over “one-off” projects

You will be part of a program that:

  • Processes millions of records daily
  • Supports regulatory, risk, and customer-facing use cases
  • Is modernizing legacy mainframe pipelines into scalable enterprise data flows

What You Will Work On

  • Enterprise-scale ETL pipelines sourcing from mainframe systems
  • Informatica PowerCenter-based batch processing
  • File-based ingestion, reconciliation, and delta processing
  • Data quality, controls, and auditability in a regulated environment
  • Close collaboration with architecture, QA, and downstream consumers

Full duties and qualifications are listed in the Requirements and Responsibilities sections below.

Requirements

  • 8+ years of Informatica PowerCenter development
  • Strong experience with mainframe data sources (COBOL copybooks, flat files / VSAM)
  • Advanced SQL and relational database knowledge
  • Strong understanding of Big Data platforms (Hive, Impala)
  • Deep knowledge of databases (Oracle, DB2, Teradata)
  • Experience in banking or financial services
  • Strong understanding of batch scheduling and dependencies
  • Experience with large-volume, performance-sensitive ETL jobs

Nice To Haves

  • CA-7 or similar batch schedulers
  • Unix / Linux scripting
  • Data reconciliation & audit frameworks
  • Exposure to data warehouse or ODS environments
  • Experience modernizing legacy ETL pipelines

Responsibilities

  • Design, develop, and maintain Informatica PowerCenter mappings, workflows, and sessions
  • Build high-performance ETL pipelines processing large, complex datasets
  • Implement full-file and delta-based data processing patterns
  • Optimize jobs for performance, scalability, and reliability
  • Work with mainframe-generated files (COBOL copybooks, VSAM, flat files)
  • Interpret and validate mainframe data structures
  • Collaborate with mainframe teams on extract design and scheduling
  • Troubleshoot data issues across distributed and mainframe environments
  • Implement data validation, reconciliation, and control totals
  • Support audit and compliance requirements
  • Ensure data lineage and traceability
  • Build restartable and recoverable batch processes
  • Participate in batch monitoring and incident resolution
  • Perform root-cause analysis for data issues and job failures
  • Support controlled deployments and releases
  • Improve resiliency and reduce operational risk
  • Contribute to ETL standards, patterns, and best practices
  • Proactively identify opportunities to simplify and modernize pipelines

Benefits

  • Competitive compensation
  • Comprehensive insurance options
  • Matching contributions through the 401(k) plan and the share purchase plan
  • Paid time off for vacation, holidays, and sick time
  • Paid parental leave
  • Learning opportunities and tuition assistance
  • Wellness and well-being programs


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 5,001-10,000
