Senior ETL Developer

Tria Federal

About The Position

Softrams is seeking a strong Senior ETL Developer to join our collaborative and agile team delivering modern, mission-critical data solutions for U.S. federal government health IT programs. This role is central to designing, developing, and maintaining the data pipelines that power our analytics and reporting capabilities — extracting data from diverse sources, transforming it into consistent, reliable formats, and loading it into data warehouses and target systems. Operating within a SAFe Agile framework, the ETL Developer will work alongside data engineers, application developers, and business stakeholders to solve complex data challenges across large-scale Medicaid and Medicare programs. This role is ideal for a data professional who thrives in ambiguity, takes initiative in investigating and dissecting complex datasets, and is ready to own end-to-end pipeline development while contributing to broader data architecture decisions across a cloud-native AWS ecosystem.

Requirements

  • 8+ years of end-to-end ETL / data pipeline development
  • 5 years of experience with Python & PySpark
  • Experience with AWS Glue & Data Lake Architecture (S3, EMR, LakeFormation)
  • 5+ years of strong PL/pgSQL or PL/SQL experience
  • 6+ years of experience with complex SQL — Data Extraction & Manipulation
  • 3+ years of experience with workflow orchestration (Airflow, Prefect, NiFi, or equivalent)
  • 3+ years of experience with cloud data warehousing (Snowflake, Redshift, or equivalent)
  • 3+ years of experience working in Agile/Scrum on large-scale programs
  • Excellent written, verbal, and interpersonal communication skills
  • Strong time management skills with the ability to quickly triage and escalate issues
  • Strong attention to detail and a focus on task completion

Responsibilities

  • Analyze and understand complex business and engineering challenges to design effective data pipeline solutions
  • Design, develop, and maintain end-to-end ETL/ELT data pipelines using Python, PySpark, and AWS Glue
  • Build and manage data validation pipelines using Great Expectations or equivalent frameworks
  • Implement and maintain workflow orchestration using Apache Airflow, Prefect, Apache NiFi, or equivalent tools
  • Architect and operate Data Lake solutions in AWS, including S3, Glue, EMR, and Lake Formation
  • Write complex SQL queries for data extraction, transformation, and manipulation across multiple database systems
  • Collaborate with cross-functional teams including data engineers, application developers, and business stakeholders
  • Ensure data quality, reliability, and integrity across all pipeline stages
  • Contribute to data architecture decisions and promote best practices across the team