Mid-Level Data Engineer

Nakupuna Companies · Arlington, VA (Hybrid)

About The Position

Overview

Nakupuna Prime is looking for a Data Engineer to design, build, and maintain data pipelines and data warehouse infrastructure to ensure reliable, scalable, and secure data integration that supports reporting and analytics. This is a mid-level role expected to execute data integration work independently and escalate complex issues to senior engineering as needed. This effort supports the United States Naval Community College (USNCC).

Requirements

  • Hands-on experience designing and maintaining ETL/ELT pipelines using Python and SQL.
  • Working knowledge of data warehouse concepts, schema design, and performance optimization.
  • Experience implementing data quality checks, logging, and monitoring to detect pipeline failures and data anomalies.
  • Ability to troubleshoot and remediate data discrepancies and integration issues across multiple source systems.
  • Experience collaborating with BI developers and analysts to validate transformations and definitions.
  • This position requires a bachelor’s degree in Engineering, Computer Science, or equivalent.
  • This position requires access to military installations.
  • Must be able to qualify for and obtain base access and pass a background check.
  • Must be a U.S. citizen.
  • This position is hybrid. Work will be done on site at least three (3) days a week in Arlington, VA. Remote candidates will be considered.
  • Ability to perform repetitive motions with the hands, wrists, and fingers.
  • Ability to engage in and follow audible communications in emergency situations.
  • Ability to sit for prolonged periods at a desk while working on a computer.
  • Ability to walk for extended periods throughout the workday, including moving between buildings and covering long distances on foot.
  • Occasional travel may be required.

Nice To Haves

  • 5+ years of data engineering experience designing ETL/ELT pipelines and data warehouse architecture
  • Advanced SQL skills (PostgreSQL, Redshift)
  • Experience with Python-based ETL development and automation
  • Familiarity with Salesforce data models
  • Experience implementing monitoring, logging, and data quality checks

Responsibilities

  • Design, build, and maintain ETL/ELT pipelines using Python and SQL to ingest data from Salesforce Education Cloud, flat files, and API endpoints into the data warehouse (see the illustrative sketch after this list).
  • Implement robust error handling, logging, and automated monitoring to detect and track data quality issues.
  • Define schemas and curate unified student datasets that meet completeness and accuracy targets for downstream reporting and modeling.
  • Optimize AWS Redshift and PostgreSQL performance through tuning, query optimization, and storage management.
  • Serve as escalation point for complex data discrepancies and pipeline failures; provide root-cause analysis and permanent fixes.
  • Coordinate with analysts and BI developers to validate transformations and resolve mismatches.
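For illustration only, below is a minimal sketch of the kind of pipeline this role owns: pull records from a REST endpoint, apply a basic data quality check, and load them into PostgreSQL with logging and error handling. The endpoint URL, connection string, table, and column names are hypothetical stand-ins, not the actual USNCC systems or schemas.

    # Minimal ETL sketch: API -> validation -> PostgreSQL load.
    # All endpoint URLs, table names, and columns below are hypothetical.
    import logging

    import psycopg2
    import requests

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("student_etl")

    API_URL = "https://api.example.edu/v1/students"   # hypothetical source endpoint
    DSN = "dbname=warehouse user=etl host=localhost"  # hypothetical connection string

    def extract():
        """Pull student records from the source API; fail loudly on HTTP errors."""
        resp = requests.get(API_URL, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def validate(records):
        """Basic data quality check: drop and log rows missing required fields."""
        clean, rejected = [], 0
        for r in records:
            if r.get("student_id") and r.get("email"):
                clean.append((r["student_id"], r["email"], r.get("program")))
            else:
                rejected += 1
        if rejected:
            log.warning("Rejected %d records failing required-field checks", rejected)
        return clean

    def load(rows):
        """Upsert into the warehouse; the transaction rolls back on any failure."""
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.executemany(
                """INSERT INTO staging.students (student_id, email, program)
                   VALUES (%s, %s, %s)
                   ON CONFLICT (student_id) DO UPDATE
                   SET email = EXCLUDED.email, program = EXCLUDED.program""",
                rows,
            )
        log.info("Loaded %d rows", len(rows))

    if __name__ == "__main__":
        try:
            load(validate(extract()))
        except Exception:
            log.exception("Pipeline run failed")  # surfaced to monitoring/alerting
            raise

In practice, logic like this would run under a scheduler or orchestrator with retries, and the rejected-record warnings would feed the monitoring and alerting described in the responsibilities above.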