Guidehouse is seeking an experienced Data Engineer to join our Technology, AI, and Data practice within the Defense & Security segment. This individual will have a strong data engineering background and be a hands-on technical contributor, responsible for designing, implementing, and maintaining scalable, cloud-native data pipelines that power interactive dashboards, enabling federal clients to achieve mission outcomes, operational efficiency, and digital transformation. This is an exciting opportunity for someone who thrives at the intersection of data engineering, Google Cloud technologies, and public sector modernization.

The Data Engineer will collaborate with cross-functional teams and client stakeholders to modernize legacy environments, implement scalable BigQuery-centric data pipelines using Dataform and Python, and support advanced analytics initiatives for our federal client within the insurance space.

Client Leadership & Delivery
- Collaborate with government clients to understand enterprise data architecture, ingestion, transformation, and reporting requirements within a Google Cloud Platform (GCP) environment.
- Communicate technical designs, tradeoffs, and delivery timelines clearly to both technical and non-technical audiences.
- Lead the development of extract-transform-load (ETL) and extract-load-transform (ELT) pipelines using Cloud Composer (GCP-hosted Airflow), Dataform, and BigQuery to support our analytical data warehouse powering downstream Looker dashboards.
- Adhere to high-quality delivery standards and promote measurable outcomes across data migration and visualization efforts.

Solution Development & Innovation
- Design, develop, and maintain scalable ETL/ELT pipelines using SQL (BigQuery), Dataform (SQLX), Cloud Storage, and Python (Cloud Composer/Airflow, Cloud Functions).
- Apply modern ELT/ETL and analytics engineering practices using BigQuery and Dataform to enable version-controlled, testable, and maintainable data transformations.
- Leverage tools such as GitLab and GitHub to manage version control, merge requests, and promotion pipelines.
- Optimize data pipelines and warehouse performance for large-scale analytical workloads, including partitioning, clustering, incremental processing, and cost optimization, to enable downstream BI in Looker.
- Validate compliance with federal data governance, security, and performance standards.
- Design and document enterprise data models, metadata strategies, data lineage frameworks, and other relevant documentation as needed.
- Align data from multiple discrete datasets into a cohesive, interoperable architecture, identifying opportunities for dataset linkages, normalization, and field standardization.
- Assist with cleanup of existing data and models, including the use of ETL.

Practice & Team Leadership
- Work closely with data architects, data scientists, data analysts, and cloud engineers to deliver integrated solutions.
- Collaborate across Scaled Agile Framework (SAFe) teams and participate in Agile ceremonies, including standups, retrospectives, and Program Increment (PI) planning.
- Manage tasks and consistently document progress and outcomes using Confluence and Jira.
- Support documentation, testing, and deployment of data products.
- Mentor junior team members and contribute to reusable frameworks and accelerators.
- Contribute to thought leadership, business development, and best-practice development across the AI & Data team.
Job Type
Full-time
Career Level
Entry Level
Number of Employees
5,001-10,000 employees