Data Engineer

Accenture Federal Services, Suitland, MD

About The Position

We are looking for a skilled and passionate Data Engineer to join our team. You will play a critical role in designing, building, and maintaining our data infrastructure to ensure seamless data flow, scalability, and reliability. You will work closely with data scientists, analysts, and other stakeholders to develop efficient data pipelines, manage large datasets, and integrate machine learning models into production environments.

Requirements

  • 2 years of experience as a Data Engineer or similar role.
  • Strong proficiency in Python or other programming languages relevant to data engineering.
  • Hands-on experience with data pipeline orchestration tools (e.g., Apache Airflow, dbt, Prefect, Dagster).
  • Solid understanding of cloud platforms (AWS strongly preferred; GCP or Azure experience also considered).
  • Expertise in SQL and familiarity with relational and columnar databases (e.g., PostgreSQL, Snowflake, BigQuery).
  • Knowledge of big data processing frameworks (e.g., Apache Spark, Databricks, or Apache Kafka).
  • Familiarity with machine learning workflows and experience implementing MLOps tools (e.g., Amazon SageMaker, MLflow, or Kubeflow) in production environments.
  • Strong troubleshooting skills and experience monitoring data pipelines and system health using tools like Amazon CloudWatch, Datadog, or Great Expectations.
  • Excellent communication skills and a collaborative mindset, with a focus on documentation and best practices.
  • An active TS/SCI federal security clearance is required.

Nice To Haves

  • Experience working with large-scale distributed systems.
  • Knowledge of data governance and security best practices.
  • Proven ability to work in cross-functional teams and contribute to problem-solving and innovation.

Responsibilities

  • Write clean, efficient, and scalable code to build and optimize data solutions using programming languages like Python.
  • Design, build, and orchestrate robust and reliable data workflows using tools such as Apache Airflow, dbt, Prefect, or Dagster.
  • Build and operate data solutions in cloud environments, primarily AWS; experience in GCP or Azure is also highly valued.
  • Extract, integrate, and ensure the quality of data from various sources using tools and technologies such as SQL, PostgreSQL, Snowflake, Amazon Redshift, or BigQuery.
  • Leverage frameworks like Apache Spark, Databricks, or Apache Kafka to process and manage large-scale data workflows with reliability and efficiency.
  • Support the implementation, deployment, and scaling of machine learning models in production environments using tools like Amazon SageMaker, MLflow, or Kubeflow.
  • Monitor data pipeline health, troubleshoot issues, and ensure data consistency using tools such as Amazon CloudWatch, Datadog, or Great Expectations.
  • Work closely with data scientists, analysts, and other stakeholders to understand data requirements, communicate solutions, and document processes using tools like Git, Jira, and Confluence.

Benefits

  • Accenture Federal Services offers a wide variety of benefits.