Data Engineer

Booz Allen Hamilton
McLean, VA
Remote

About The Position

Data is only as powerful as the pipelines that move it, the structure that organizes it, and the trust that supports it. As a Data Engineer, you’ll help build and scale a cloud-native data platform that enables mission-critical insights across the enterprise. In this role, you’ll design, build, and optimize data pipelines that ingest, transform, and deliver data from a wide range of internal and external sources. You’ll work within a modern AWS-based data ecosystem to support analytics, APIs, and AI/ML use cases, ensuring data is reliable, secure, and accessible to those who need it.

You’ll collaborate closely with cloud engineers, API developers, data analysts, and mission stakeholders to develop reusable data products and scalable ingestion frameworks. From batch ingestion to near-real-time processing, your work will directly support national security missions by turning raw data into actionable intelligence. Work with us to build the backbone of enterprise data. Join us. The world can’t wait.

Requirements

  • 3+ years of experience in data engineering, data integration, or backend data development
  • Experience designing, building, and maintaining ETL/ELT data pipelines
  • Experience in SQL and Python
  • Experience ingesting and integrating data from multiple sources such as APIs, flat files, and databases
  • Experience with cloud-based data platforms, including AWS, and distributed data processing
  • Experience working in Agile development environments and collaborating with cross-functional teams
  • Ability to translate technical data concepts for stakeholders
  • Top Secret clearance
  • Bachelor's degree in Computer Science, Data Science, or Engineering
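The core of the role is the ETL/ELT work described above: ingesting data from sources such as flat files, transforming it, and loading it with SQL and Python. A minimal, self-contained sketch of that pattern (all names, the sample data, and the range check are illustrative assumptions, not part of the posting):

```python
import csv
import io
import sqlite3

# Hypothetical flat-file source: a small CSV feed of sensor readings.
RAW_CSV = """sensor_id,reading,ts
a1,12.5,2024-01-01T00:00:00
a2,-1,2024-01-01T00:05:00
a1,13.1,2024-01-01T00:10:00
"""

def extract(text: str) -> list:
    """Extract: parse the flat file into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list) -> list:
    """Transform: cast types and drop rows failing a simple validity check."""
    out = []
    for r in rows:
        value = float(r["reading"])
        if value >= 0:  # discard negative (invalid) readings
            out.append((r["sensor_id"], value, r["ts"]))
    return out

def load(conn: sqlite3.Connection, rows: list) -> None:
    """Load: write the cleaned records into a SQL table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, reading REAL, ts TEXT)"
    )
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_CSV)))
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 2 rows survive the validity check
```

In a production pipeline the same extract/transform/load stages would typically target cloud storage and a warehouse rather than an in-memory database, but the shape of the code is the same.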

Nice To Haves

  • Experience with AWS services such as Amazon S3, AWS Glue, Amazon Athena, AWS Lambda, or Amazon Redshift
  • Experience implementing data validation, quality checks, and monitoring within pipelines
  • Experience with data lake or lakehouse architectures and columnar storage formats such as Parquet
  • Experience with big data frameworks such as Apache Spark and supporting data platforms, analytics environments, or AI/ML pipelines
  • Experience with DevSecOps practices and tools, including GitLab CI/CD or Terraform
  • Knowledge of data modeling concepts and layered data pipeline architectures, such as the medallion approach
  • Knowledge of API-driven data access and event-driven integration patterns
  • Ability to optimize data workflows for performance, scalability, and reliability
  • Strong problem-solving skills
  • IAT Level II, CompTIA Security+, Computing Environment, or AWS Certified Cloud Practitioner (CCP) certification
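One preferred skill above is implementing data validation, quality checks, and monitoring within pipelines. A hedged sketch of that idea, using only the standard library (the rule names, columns, and sample batch are invented for illustration): records that fail a check are routed to a quarantine list for review rather than silently dropped.

```python
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    """Collects records that passed or failed the batch's quality checks."""
    passed: list = field(default_factory=list)
    failed: list = field(default_factory=list)

def not_null(record: dict, column: str) -> bool:
    """Check that a column is present and non-empty."""
    return record.get(column) not in (None, "")

def in_range(record: dict, column: str, low: float, high: float) -> bool:
    """Check that a column parses as a number within [low, high]."""
    try:
        return low <= float(record[column]) <= high
    except (KeyError, TypeError, ValueError):
        return False

def run_checks(records: list) -> QualityReport:
    """Route each record to passed/failed based on all quality rules."""
    report = QualityReport()
    for rec in records:
        ok = not_null(rec, "id") and in_range(rec, "score", 0, 100)
        (report.passed if ok else report.failed).append(rec)
    return report

batch = [
    {"id": "r1", "score": "87"},
    {"id": "", "score": "55"},     # fails not_null on "id"
    {"id": "r3", "score": "140"},  # fails in_range on "score"
]
report = run_checks(batch)
print(len(report.passed), len(report.failed))  # 1 2
```

Frameworks such as AWS Glue Data Quality or Great Expectations provide the same pass/fail routing declaratively, with monitoring hooks attached.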

Responsibilities

  • Design, build, and optimize data pipelines that ingest, transform, and deliver data from a wide range of internal and external sources.
  • Work within a modern AWS-based data ecosystem to support analytics, APIs, and AI/ML use cases, ensuring data is reliable, secure, and accessible to those who need it.
  • Collaborate closely with cloud engineers, API developers, data analysts, and mission stakeholders to develop reusable data products and scalable ingestion frameworks.
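The posting also mentions event-driven integration patterns and AWS Lambda. A minimal sketch of an event-driven ingestion entry point, written in the style of a Lambda handler triggered by S3 object-created notifications (the bucket and key names are invented; the event shape follows the documented S3 notification format, and the processing itself is left as a stub):

```python
from urllib.parse import unquote_plus

def handler(event, context=None):
    """Return the (bucket, key) pairs referenced by an S3 notification event."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = unquote_plus(s3["object"]["key"])  # keys arrive URL-encoded
        objects.append((bucket, key))
        # A real handler would ingest the object here, e.g. via boto3.
    return objects

# Hypothetical notification for one newly landed object.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "mission-raw"},
                "object": {"key": "landing/report+2024.csv"}}}
    ]
}
print(handler(sample_event))  # [('mission-raw', 'landing/report 2024.csv')]
```

Decoding the key with `unquote_plus` matters in practice: S3 URL-encodes object keys in notifications, so spaces and special characters must be restored before the object is fetched.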

Benefits

  • health, life, disability, financial, and retirement benefits
  • paid leave
  • professional development
  • tuition assistance
  • work-life programs
  • dependent care
  • recognition awards program