ICF is looking for an enthusiastic Data Engineer to join our team and support data management and data analysis. If you are a Data Engineer interested in applying your expertise in a consulting environment, this may be the role for you.

Job Location: This position must be performed in the United States. If you accept this position, note that ICF monitors employee work locations, blocks access from foreign locations/foreign IP addresses, and prohibits personal VPN connections. You may be asked to travel to an office once a quarter. Our core work hours are 10am - 4pm Eastern Time, with the option to start earlier or work later depending on your time zone. Please note, however, that our client is on the East Coast and may occasionally start meetings before 10am that require your participation.

What You Will Do:
- Create dashboards in AWS QuickSight with both visuals (charts, graphs, etc.) and tables that let end users slice and dice data for insights into various business processes.
- Design and maintain scalable Spark-based data ingestion pipelines with adaptive change management to accommodate evolving business needs and technical requirements.
- Lead centralized orchestration of both batch and event-driven workflows, ensuring seamless and efficient data movement throughout the platform.
- Develop reusable templates and self-service solutions that enable efficient updates and enhancements to data models, empowering teams to manage changes independently.
- Optimize distributed compute resources to improve the performance, reliability, and cost-effectiveness of data processing environments.
- Define and enforce data contracts, manage schema versioning, and automate metadata processes to uphold reliable data standards and strong governance.
- Collaborate in a federated model to operationalize essential compliance requirements, including handling personally identifiable information (PII), data retention, and consistent naming conventions across datasets.
- Enforce robust data quality checks, including schema validation, null handling, uniqueness, volume, freshness, and distribution metrics, as well as referential integrity across all datasets.
- Embed orchestration of data quality checks at checkpoints throughout the pipeline to ensure ongoing compliance and reliability.
- Log, audit, and measure all quality results to provide transparency, accountability, and continuous improvement in data quality management.

Leadership & Execution
- Work with architects as a technical leader, helping establish engineering standards and best practices and guiding critical design decisions.
- Partner with business and domain owners to understand domain data structures and translate requirements into reliable, scalable data products.
- Lead incident triage, conduct root cause analysis, and drive continuous improvements in platform reliability and data quality.
- Define and track key performance indicators (KPIs) for data quality, freshness, stability, adoption, and cost.
- Demo work in both small and large virtual settings with clients and end users to gather feedback on enhancing dashboards to meet business requirements.
- Work within a SAFe scaled agile framework, collaborating with other team members to ensure solutions meet client needs with the highest quality.
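To give candidates a concrete sense of the data quality work described above (schema validation, null handling, uniqueness), here is a minimal, framework-agnostic sketch in plain Python. In practice these checks would run on Spark or a dedicated data quality tool; the schema, field names, and thresholds below are purely illustrative assumptions, not part of the actual platform.

```python
# Illustrative data-quality sketch: schema validation, null rate, and
# key uniqueness over a batch of rows. All names and thresholds here are
# hypothetical; a production version would run on Spark against real datasets.

EXPECTED_SCHEMA = {"id": int, "email": str, "amount": float}  # assumed schema

def check_schema(row):
    """Return a list of schema violations (missing field or wrong type)."""
    return [f"{col}: expected {t.__name__}"
            for col, t in EXPECTED_SCHEMA.items()
            if not isinstance(row.get(col), t)]

def run_quality_checks(rows, key="id", max_null_rate=0.01):
    """Run schema, uniqueness, and null-rate checks; return a result dict."""
    results = {"schema_errors": [], "duplicate_keys": set()}
    seen, nulls = set(), 0
    for row in rows:
        results["schema_errors"].extend(check_schema(row))
        k = row.get(key)
        if k in seen:                      # uniqueness check on the key column
            results["duplicate_keys"].add(k)
        seen.add(k)
        nulls += sum(1 for v in row.values() if v is None)
    total_cells = max(1, len(rows) * len(EXPECTED_SCHEMA))
    results["null_rate"] = nulls / total_cells
    results["passed"] = (not results["schema_errors"]
                         and not results["duplicate_keys"]
                         and results["null_rate"] <= max_null_rate)
    return results
```

As the responsibilities note, results like these would then be logged and audited at checkpoints in the pipeline rather than evaluated ad hoc.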
Job Type
Full-time
Career Level
Mid Level
Number of Employees
1-10 employees