Lead Data Engineer

Capgemini | Nashville, TN

About The Position

Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

We are seeking a skilled Data Engineer with hands-on experience in Databricks and Google Cloud Platform (GCP) to design, build, and optimize data pipelines and analytics solutions. The ideal candidate will have a strong background in distributed data processing, cloud architecture, and data modeling. This role partners closely with data analysts, data scientists, and business stakeholders to deliver scalable, reliable, and high-quality data products.

Requirements

  • Proven experience as a Data Engineer or in a similar role.
  • Strong hands-on experience with Databricks, including PySpark/Spark, Delta Lake, and Databricks workflows/jobs.
  • Proficiency with GCP services, including BigQuery, Cloud Storage, and Dataflow or Dataproc.
  • Strong coding skills in Python and SQL.
  • Solid understanding of distributed systems, data warehousing, and data architecture principles.
  • Experience with CI/CD tools (GitHub, GitLab, Azure DevOps, or similar).

Nice To Haves

  • Databricks or GCP certifications (e.g., Data Engineer, Architect).
  • Experience with Terraform or other Infrastructure-as-Code tools.
  • Knowledge of ML workflows or MLOps frameworks.
  • Familiarity with data governance tools (Unity Catalog, Great Expectations, dbt, etc.).

Responsibilities

  • Design, build, and maintain ETL/ELT pipelines using Databricks (PySpark, Delta Lake).
  • Optimize pipelines for performance, cost efficiency, and scalability within GCP.
  • Develop batch and streaming data processes using Spark Streaming and related technologies.
  • Implement data solutions leveraging GCP services such as BigQuery, Cloud Storage, Dataflow, Cloud Composer, and Vertex AI integrations.
  • Apply best practices for cloud security, IAM configuration, monitoring, and cost management.
  • Build and maintain data models, including dimensional modeling and data vault structures.
  • Implement data quality frameworks, validation rules, and automated testing.
  • Manage data versioning, governance, and lineage using tools such as Unity Catalog or GCP Data Catalog.
  • Partner with cross-functional teams to gather requirements and translate them into technical designs.
  • Provide technical guidance and influence engineering best practices across the team.
  • Contribute to documentation, architectural diagrams, and knowledge sharing.
  • Lead and manage a team of data engineers, define and execute the data engineering strategy, and ensure the effective delivery of data solutions.
  • Provide technical expertise, drive innovation, and collaborate with stakeholders to deliver high-quality, scalable, and reliable data infrastructure and solutions.

Benefits

  • Paid time off based on employee grade (A-F), as defined by policy: 12-25 vacation days, depending on grade
  • Company-paid holidays
  • Personal days
  • Sick leave
  • Medical, dental, and vision coverage (or provincial healthcare coordination in Canada)
  • Retirement savings plans (e.g., 401(k) in the U.S., RRSP in Canada)
  • Life and disability insurance
  • Employee assistance programs
  • Other benefits as provided by local policy and eligibility


What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

No Education Listed

Number of Employees

5,001-10,000 employees
