Senior IT Data Engineer

The Britton Group
Washington, DC
Onsite

About The Position

This position supports a mission-critical data engineering initiative focused on building, optimizing, and sustaining enterprise data pipelines within a large-scale distributed data environment. The selected candidate will play a key role in enabling data-driven decision-making by ensuring the availability, integrity, and performance of critical data assets.

We are seeking a Senior IT Data Engineer with strong experience in big data platforms, data pipeline development, and distributed processing frameworks. This role is ideal for engineers who thrive in complex data ecosystems, understand end-to-end data lifecycle management, and can build scalable solutions that support enterprise analytics and operational needs. You will be responsible for designing and maintaining robust data pipelines, integrating data from multiple sources, and ensuring high data quality and reliability across the platform.

Requirements

  • Minimum of 5 years of experience in data engineering, application development, or related roles
  • At least 5 years of experience with Python or application/data development
  • At least 5 years of experience with data ingestion tools such as Apache NiFi
  • Advanced knowledge of SQL and distributed data processing frameworks
  • Experience working in Agile environments (Scrum or Kanban)
  • Experience supporting CI/CD pipelines and data platform operations
  • Strong experience with Cloudera Data Platform
  • Strong experience with Apache NiFi
  • Strong experience with Hadoop Ecosystem (MapReduce, Hive, HBase)
  • Strong experience with Apache Spark / PySpark
  • Strong experience with Kafka
  • Strong experience with Python, SQL, Java
  • Strong experience with UNIX/Linux (shell scripting)
  • Strong experience with Git (version control)
  • Strong experience with CI/CD tools and DevOps workflows
  • U.S. Citizenship required

Nice To Haves

  • Experience with data transformation frameworks such as PySpark, pandas, or dbt
  • Experience implementing CI/CD pipelines for data engineering workflows
  • Familiarity with data governance, data lifecycle management, and data protection practices
  • Experience working with real-time or streaming data architectures
  • Exposure to cloud-based data platforms or hybrid data environments
  • Experience supporting federal or regulated environments
  • Experience with Microsoft SQL Server

Responsibilities

  • Designing, developing, and maintaining data pipelines within distributed data environments such as Cloudera Data Platform
  • Building ETL/ELT workflows to ingest, cleanse, transform, and aggregate structured and unstructured data
  • Working with large-scale data processing frameworks including Hadoop, Spark, Hive, HBase, and Kafka
  • Developing data solutions using Python, SQL, and Java
  • Utilizing data integration and ingestion tools such as Apache NiFi
  • Performing data quality validation, monitoring, and performance tuning across data pipelines
  • Supporting long-term operations, maintenance, and optimization of enterprise data platforms
  • Implementing version-controlled, code-based data solutions using Git and DevOps best practices
  • Collaborating within Agile environments using Scrum or Kanban methodologies
  • Working in UNIX/Linux environments, including shell scripting and command-line operations
  • Designing scalable, high-performance data pipelines across distributed systems
  • Ensuring data accuracy, consistency, and adherence to data quality standards