Accenture · Posted 7 months ago
$110,400 - $160,300/Yr
Full-time • Mid Level
Fort Washington, MD
Professional, Scientific, and Technical Services

Accenture Federal Services is searching for a Data Engineer to design, develop, and maintain data solutions for data generation, collection, and processing. This role will create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.

Responsibilities:

  • Design and implement data pipelines utilizing AWS S3, Apache NiFi, and ELK Stack
  • Create automated data ingestion workflows with API integration
  • Implement NLP-based data transformation and conditioning processes
  • Ensure pipeline monitoring, maintenance, and optimization
  • Design and implement databases based on documented data models
  • Lead data storage solutions and architecture decisions
  • Establish data cleansing and normalization protocols
  • Integrate multiple data sources into consistent, machine-readable formats
  • Implement methodologies for improving data reliability and quality
  • Manage data categorization, labeling, and retention policies
  • Enforce data governance standards across projects
  • Develop and maintain data quality monitoring systems
  • Lead dashboard development and customized visualization initiatives
  • Create and maintain reporting solutions
  • Provide technical guidance for program study areas
Technical skills:

  • Proficiency in version control and CI/CD practices for data pipelines
  • Knowledge of stream processing frameworks (Apache Kafka, Kinesis)
  • Experience with data warehousing solutions (Snowflake, Redshift)
  • Strong Python programming skills for data pipeline development
  • Advanced SQL knowledge for complex data transformations
  • Familiarity with containerization (Docker, Kubernetes)
  • Set up monitoring and alerting for pipeline failures
  • Create data validation checks and reconciliation processes
  • Develop automated testing for data transformations
  • Experience optimizing large-scale data processing jobs
  • Knowledge of query optimization techniques
  • Ability to troubleshoot performance bottlenecks
  • Implement data security best practices
  • Understanding of security protocols for sensitive data
Basic qualifications:

  • 3 years of experience creating data pipelines, ensuring data quality, and implementing ETL processes
  • Experience with Python programming
  • Experience with AWS cloud
  • Experience with Apache NiFi
  • Experience with ELK Stack
  • Experience with Docker or Kubernetes
  • Bachelor's degree
  • Must obtain IAT Level II certification within 90 days of starting
Preferred qualifications:

  • Experience with AWS cloud services, particularly S3
  • Proficiency in Apache NiFi and ELK Stack
  • Experience with API integration and data extraction
  • Knowledge of NLP and data transformation techniques
  • Understanding of data governance principles
  • Experience with Apache NiFi / Kafka
  • Experience in relational databases
  • Experience in NoSQL solutions
  • Proficiency in Python
  • Proficiency in Java
  • Experience with ETL pipeline development and maintenance
  • Experience with Elasticsearch, Logstash and Kibana
Benefits:

  • 401(k)
  • 401(k) matching
  • Dental insurance
  • Health insurance
  • Life insurance
  • Paid holidays
  • Tuition reimbursement
  • Professional development
  • Flexible scheduling