Sr Data Engineer

Fantom Corporation, Chantilly, VA

About The Position

Fantom Corporation is a mission-focused organization supporting critical programs across the defense and intelligence community. We partner with our customers to deliver high-impact technical solutions while fostering a culture built on trust, expertise, and long-term career growth.

We are seeking an experienced Data Engineer to design, develop, and maintain scalable data pipelines that support large-scale data processing and analytics initiatives. In this role, you will collaborate closely with software development and engineering teams to integrate data solutions into enterprise systems and applications. The successful candidate will help ensure data is properly ingested, transformed, and delivered in formats that enable teams to derive actionable insights.

Requirements

  • Active TS/SCI clearance with Polygraph
  • Master’s degree in Computer Science, Information Systems, Engineering, or related technical discipline (or equivalent experience)
  • At least 8 years of experience developing or maintaining data pipelines in collaboration with development teams
  • Minimum of 4 years of hands-on experience with Python development
  • Experience working with AWS cloud services
  • Experience working with relational databases such as PostgreSQL, Oracle, or MySQL
  • Strong experience writing and maintaining SQL queries
  • At least 4 years of experience working in Linux environments, including shell scripting
  • Experience handling structured and unstructured data formats, including JSON

Nice To Haves

  • Strong communication and collaboration skills
  • Experience with workflow orchestration tools such as Apache NiFi or Apache Airflow
  • Development experience with Java or Scala
  • Experience managing or supporting EMR or Spark clusters
  • Experience optimizing the performance of data pipelines or large-scale data systems
  • Experience working with Apache Hive or Apache Iceberg
  • Familiarity with implementing cloud-based security best practices
  • Experience using automated code deployment processes

Responsibilities

  • Design, develop, and maintain robust data pipelines to support ingestion, transformation, and delivery of large datasets
  • Collaborate with software engineers and technical teams to integrate data processing solutions into existing systems and applications
  • Develop and optimize SQL queries for efficient data retrieval, transformation, and analysis
  • Manage and maintain relational databases including PostgreSQL, Oracle, and MySQL
  • Develop automation scripts using Python and Linux shell scripting to support data processing workflows
  • Operate within AWS cloud environments to support data infrastructure and pipeline orchestration
  • Structure and deliver data outputs in formats suitable for downstream processing and analysis, including JSON
  • Support the preparation and management of both structured and unstructured datasets
  • Ensure reliability, performance, and scalability of data pipelines and related infrastructure