Mid-Level Data Engineer

Fantom Corporation
Chantilly, VA

About The Position

Fantom Corporation is a mission-focused organization supporting critical programs across the defense and intelligence community. We partner with our customers to deliver high-impact technical solutions while fostering a culture built on trust, expertise, and long-term career growth.

We are seeking a motivated and detail-oriented Data Engineer to support the design, development, and maintenance of data pipelines that enable reliable data processing and analytics. In this role, you will collaborate closely with software engineers and technical teams to build data solutions that help transform large datasets into actionable insights. The ideal candidate will have experience developing data pipelines, working with relational databases, and supporting cloud-based data environments.

Requirements

  • Active TS/SCI clearance with Polygraph
  • Master’s degree in Computer Science, Information Systems, Engineering, or a related technical discipline (or equivalent experience)
  • At least 4 years of experience developing or maintaining data pipelines within development teams
  • Minimum of 2 years of hands-on experience with Python
  • Experience working with AWS cloud services
  • Experience working with relational databases such as PostgreSQL, Oracle, or MySQL
  • Experience writing and maintaining SQL queries
  • At least 2 years of experience working in Linux environments, including shell scripting
  • Experience preparing and managing both structured and unstructured datasets, including JSON formats

Nice To Haves

  • Strong communication and collaboration skills
  • Experience using workflow orchestration tools such as Apache NiFi or Apache Airflow
  • Development experience with Java or Scala
  • Experience supporting or managing EMR or Spark clusters
  • Experience optimizing performance of data pipelines or large-scale data systems
  • Experience working with Hive or Iceberg
  • Familiarity with cloud security practices
  • Experience with automated code deployment processes

Responsibilities

  • Design, develop, and maintain data pipelines that support ingestion, transformation, and delivery of large datasets
  • Collaborate with software development teams to integrate data solutions into existing applications and systems
  • Write and optimize SQL queries to support data extraction, transformation, and analysis
  • Manage and support relational databases including PostgreSQL, Oracle, and MySQL
  • Develop scripts and automation tools using Python and Linux shell scripting to support data processing workflows
  • Support cloud-based data infrastructure and workflows within AWS environments
  • Prepare and structure data outputs in formats suitable for analytics and downstream systems, including JSON
  • Work with structured and unstructured datasets to support enterprise data processing initiatives
  • Ensure reliability and performance of data pipelines and data systems