IBM DataStage Developer / Engineer – Job Description – Remote

Novalink Solutions LLC – Lincoln, NE
Remote

About The Position

Job Summary

  • We are seeking an experienced IBM DataStage Developer/Engineer to design, develop, and maintain IBM InfoSphere DataStage jobs supporting the Medicaid Data Management Platform.
  • This role focuses on building robust, scalable data pipelines that enable accurate and high-performance reporting within a Medicaid/MMIS data environment.
  • The ideal candidate will work closely with data architects, analysts, and business stakeholders to support data integration, transformation, and data quality initiatives.

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • 5–8+ years of experience in ETL development with IBM InfoSphere DataStage.
  • Experience with Unix/Linux and shell scripting.
  • Strong experience in Medicaid/MMIS data environments.
  • Proficiency in SQL and relational databases (DB2, Oracle, SQL Server, etc.).
  • Experience working with large-scale data warehouses and data marts.
  • Knowledge of data modeling concepts and ETL design patterns.
  • Experience integrating data from mainframe and on-premises systems.
  • Strong analytical, troubleshooting, and problem-solving skills.
  • Excellent communication and collaboration skills.

Nice To Haves

  • Experience with cloud platforms such as AWS or Azure (S3, Redshift, etc.).
  • Familiarity with big data technologies and modern data platforms.
  • Experience with scheduling/orchestration tools (Control-M, Airflow, etc.).
  • Knowledge of healthcare data standards and regulatory requirements (HIPAA).
  • Experience supporting reporting and analytics platforms.
  • IBM DataStage certification is a plus.

Responsibilities

  • Design, develop, and maintain ETL jobs using IBM InfoSphere DataStage.
  • Build and optimize data pipelines for Medicaid/MMIS data processing and reporting.
  • Extract, transform, and load data from multiple sources including mainframe, relational databases, and external systems.
  • Collaborate with data architects to implement scalable and reusable ETL frameworks.
  • Perform data cleansing, validation, and transformation to ensure data quality and integrity.
  • Optimize job performance and troubleshoot ETL failures and data issues.
  • Work with large datasets including claims, eligibility, provider, and reference data.
  • Support data integration between legacy systems, data warehouses, and cloud environments.
  • Develop and maintain documentation for ETL processes and data mappings.
  • Participate in testing, deployment, and production support activities.