Data Engineer

CGI
Arlington, VA

About The Position

CGI Federal is seeking a Data Engineer to design, build, and manage data pipelines and systems. You will help clients manage large datasets to gain insights and improve decision-making. This position is ideal for a candidate with strong foundational skills who is eager to grow their expertise in a fast-paced, agile environment. We take an innovative approach to supporting our client's mission of growing an enterprise analytics platform, working side by side using emerging technologies. This position is located in Arlington, VA.

Requirements

  • Education: Bachelor's degree in Computer Science, Engineering, or a related field.
  • Experience: 2–5 years of experience in data engineering or data management.
  • Core Engineering: Strong foundational skills in ETL/ELT design, data integration, and data pipeline development.
  • Languages: Proficiency in SQL and Python is required.
  • Distributed Processing: Experience working with distributed data processing engines and scalable compute frameworks.
  • Data Platforms: Hands-on experience with modern data engineering platforms supporting pipeline orchestration, transformation, and large-scale data management.
  • Modeling: Solid understanding of data modeling principles, database relationships, and schema design for analytics and operational use cases.
  • Cloud Ecosystems: Exposure to cloud-based data services and storage technologies across major cloud providers.
  • Version Control: Familiarity with code versioning, CI/CD practices, and collaborative development workflows.
  • Soft Skills: Strong organizational and communication skills; ability to articulate technical concepts to diverse audiences.

Nice To Haves

  • Experience working within enterprise data platforms such as Palantir Foundry, Databricks, or Azure Synapse.
  • Familiarity with cloud platforms (AWS, Azure, etc.) for data engineering.
  • Knowledge of data visualization tools (e.g., Tableau, Power BI).
  • Experience with big data technologies (e.g., Hadoop).
  • Strong problem-solving and debugging skills.
  • Experience with workflow orchestration tools or job scheduling frameworks.
  • Understanding of data governance concepts, including metadata management, lineage, access control, and data stewardship.
  • Exposure to CI/CD practices, version control workflows, and automated testing in a data engineering context.

Responsibilities

  • Pipeline Development: Design, build, and maintain scalable data pipelines using modern data engineering frameworks and cloud-based processing technologies.
  • Data Modeling: Work with data architects to define and maintain data models, ensuring clear relationships and efficient structures.
  • Data Transformation: Design and implement transformation workflows to produce curated datasets and a well-structured semantic layer supporting analytics and data consumption.
  • Optimization: Optimize and tune data workflows for performance, reliability, and scalability across distributed data processing environments.
  • Data Quality: Ensure data quality and integrity throughout the data lifecycle, implementing checks and validations.
  • Reporting Support: Assist in enabling data visualizations and BI reporting by ensuring curated data is readily accessible to downstream analytics tools.
  • Collaboration: Collaborate with stakeholders, including data scientists and business analysts, to define data requirements and deliver scalable, well-documented solutions.

Benefits

  • Competitive compensation
  • Comprehensive insurance options
  • Matching contributions through the 401(k) plan and the share purchase plan
  • Paid time off for vacation, holidays, and sick time
  • Paid parental leave
  • Learning opportunities and tuition assistance
  • Wellness and well-being programs