Parsons Corporation • Posted 3 months ago
$100,900 - $176,600/Yr
Full-time • Senior
5,001-10,000 employees

Parsons is seeking a highly skilled and experienced Senior Data Engineer to join our enterprise data team. This senior-level role is pivotal in shaping our modern data architecture and enabling scalable, self-service analytics across the organization. The ideal candidate will have deep expertise in Snowflake, Azure Data Factory (ADF), and/or Informatica, and a strong understanding of lakehouse medallion architecture to support both batch and near-real-time data processing.

Responsibilities:

  • Design and implement scalable, efficient data ingestion pipelines using ADF, Informatica, and parameterized notebooks to support bronze-silver-gold (medallion) architecture.
  • Develop robust ETL/ELT workflows to ingest data from diverse sources (e.g., SQL Server, flat files, APIs) into Parquet/Delta formats and model it into semantic layers in Snowflake.
  • Build and maintain incremental and CDC-based pipelines to support near-real-time and daily batch processing.
  • Apply best practices for Snowflake implementation, including performance tuning, cost optimization, and secure data sharing.
  • Leverage dbt for data transformation and modeling, and implement GitHub-based source control, branching strategies, and CI/CD pipelines for deployment automation.
  • Ensure data quality, reliability, and observability through validation frameworks and self-healing mechanisms.
  • Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, trusted, and accessible data.
  • Mentor junior engineers and contribute to a culture of engineering excellence and continuous improvement.

Qualifications:

  • 5+ years of experience in data engineering, data architecture, or data platform development.
  • Strong hands-on experience with Snowflake, SQL, and Python.
  • Proficiency in PySpark and SQL notebooks (e.g., Microsoft Fabric, Databricks, Synapse, or similar).
  • Experience with Azure Data Factory and/or Informatica for building scalable ingestion pipelines.
  • Deep understanding of lakehouse architecture and medallion design patterns.
  • Experience with dbt, GitHub source control, branching strategies, and CI/CD pipelines.
  • Familiarity with data ingestion from APIs, SQL Server, and flat files into Parquet/Delta formats.
  • Strong problem-solving skills and ability to work independently in a fast-paced environment.
  • Experience with data governance, security, and compliance (e.g., SOX, HIPAA).
  • Snowflake, Databricks, and/or Azure Data Engineer certifications.
  • Exposure to real-time data processing and streaming technologies (e.g., Kafka, Spark Streaming).
  • Familiarity with data observability tools and automated testing frameworks for pipelines.
  • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Benefits:

  • Medical, dental, and vision insurance.
  • Paid time off.
  • Employee Stock Ownership Plan (ESOP).
  • 401(k).
  • Life insurance.
  • Flexible work schedules.
  • Holidays to fit your busy lifestyle.