Senior Data Engineer (Remote)

Parsons Corporation

About The Position

Parsons is looking for an amazingly talented Senior Data Engineer to join our team! In this role, you will help shape our modern data architecture and enable scalable, self-service analytics across the organization.

Requirements

  • Strong hands-on experience with T-SQL and Python.
  • Experience with comprehensive data conversion projects is preferred (ERP systems including Oracle Cloud ERP and/or SAP S/4HANA)
  • Experience with relational database systems
  • Experience with both on-premises and cloud ETL toolsets (preferably SSIS, ADF, Synapse, or AWS)
  • Familiarity with multi-dimensional and tabular models
  • 5+ years of experience in data engineering, data architecture, or data platform development.
  • Proficiency in PySpark and SQL notebooks (e.g., Microsoft Fabric, Databricks, Synapse, or similar).
  • Experience with Azure Data Factory and/or Informatica for building scalable ingestion pipelines.
  • Deep understanding of lakehouse architecture and medallion design patterns.
  • Experience with dbt, GitHub source control, branching strategies, and CI/CD pipelines.
  • Familiarity with data ingestion from APIs, SQL Server, and flat files into Parquet/Delta formats (a minimal sketch follows this list).
  • Strong problem-solving skills and ability to work independently in a fast-paced environment.
  • Must be a U.S. person
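
For illustration, a minimal sketch of the kind of ingestion described above, assuming a PySpark notebook environment (Fabric, Databricks, or Synapse) with Delta Lake available; the storage paths, connection details, and table names below are hypothetical placeholders:

# Minimal sketch: land a flat file and a SQL Server table into bronze Delta tables.
# All paths, credentials, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

# Flat-file source (CSV) -> bronze, stamped with an ingestion timestamp
orders_raw = (
    spark.read.option("header", "true")
    .csv("abfss://landing@datalake.dfs.core.windows.net/orders/*.csv")
    .withColumn("_ingested_at", F.current_timestamp())
)
orders_raw.write.format("delta").mode("append").save(
    "abfss://bronze@datalake.dfs.core.windows.net/orders"
)

# SQL Server source via JDBC -> bronze (requires the SQL Server JDBC driver on the cluster)
customers_raw = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://sql-host:1433;databaseName=erp")
    .option("dbtable", "dbo.Customers")
    .option("user", "svc_ingest")
    .option("password", "<from key vault>")  # in practice, pull from a secret scope/key vault
    .load()
    .withColumn("_ingested_at", F.current_timestamp())
)
customers_raw.write.format("delta").mode("append").save(
    "abfss://bronze@datalake.dfs.core.windows.net/customers"
)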

Nice To Haves

  • Experience with data governance, security, and compliance (e.g., SOX, HIPAA).
  • Snowflake, Azure Data Engineer, dbt, and/or Databricks certifications
  • Exposure to real-time data processing and streaming technologies (e.g., Kafka, Spark Streaming).
  • Familiarity with data observability tools and automated testing frameworks for pipelines.
  • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field

Responsibilities

  • Design and implement scalable, efficient data ingestion pipelines using ADF, Informatica, and parameterized notebooks to support bronze-silver-gold (medallion) architecture.
  • Develop robust ETL/ELT workflows to ingest data from diverse sources (e.g., SQL Server, flat files, APIs) into Parquet/Delta formats and model the data into semantic layers in Snowflake.
  • Build and maintain incremental and CDC-based pipelines to support near-real-time and daily batch processing (see the sketch after this list).
  • Apply best practices for Snowflake implementation, including performance tuning, cost optimization, and secure data sharing.
  • Leverage dbt for data transformation and modeling, and implement GitHub-based source control, branching strategies, and CI/CD pipelines for deployment automation.
  • Ensure data quality, reliability, and observability through validation frameworks and self-healing mechanisms.
  • Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, trusted, and accessible data.
  • Mentor junior engineers and contribute to a culture of engineering excellence and continuous improvement.
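
The incremental/CDC responsibility above could look roughly like the following bronze-to-silver upsert, sketched with a Delta Lake MERGE; the paths, key column, and watermark handling are hypothetical simplifications:

# Minimal sketch: incremental upsert from a bronze Delta table into silver.
# Paths, the key column, and the watermark value are hypothetical; a real pipeline
# would persist the watermark and handle deletes and schema drift.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable  # provided by the delta-spark package

spark = SparkSession.builder.appName("silver_merge").getOrCreate()

bronze_path = "abfss://bronze@datalake.dfs.core.windows.net/customers"
silver_path = "abfss://silver@datalake.dfs.core.windows.net/customers"

# Pick up only records ingested since the last successful run (simple watermark).
last_watermark = "2024-01-01T00:00:00"
changes = (
    spark.read.format("delta").load(bronze_path)
    .filter(F.col("_ingested_at") > F.lit(last_watermark))
    .dropDuplicates(["customer_id"])
)

# Upsert changed rows into the silver table on the business key.
silver = DeltaTable.forPath(spark, silver_path)
(
    silver.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)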

Benefits

  • Medical
  • Dental
  • Vision
  • Paid time off
  • Employee Stock Ownership Plan (ESOP)
  • 401(k)
  • Life insurance
  • Flexible work schedules
  • Holidays