Nearshore Sector | GCP Data Engineer

Devoteam Portugal
Remote

About The Position

We are seeking a detail-oriented and analytical GCP Data Engineer to join our nearshore team in Lisboa, Portugal. In this role, you will design, develop, and maintain scalable data pipelines and infrastructure on Google Cloud Platform, supporting our organization's data-driven initiatives. You will work collaboratively with cross-functional teams across multiple time zones, leveraging your technical expertise to deliver efficient, high-quality data solutions that drive business value.

Requirements

  • Proven experience as a Data Engineer with hands-on expertise in Google Cloud Platform (GCP)
  • Strong proficiency in SQL and data modeling for complex analytical requirements
  • Solid programming skills in Python or Java
  • Demonstrated experience designing and implementing ETL/ELT pipelines
  • Experience with BigQuery or similar cloud data warehousing solutions
  • Knowledge of distributed computing concepts and data architecture patterns
  • Familiarity with version control systems (Git) and collaborative development practices
  • Strong analytical and problem-solving skills with attention to detail
  • Ability to work effectively in a nearshore, distributed team environment

Nice To Haves

  • Experience with Apache Spark or similar distributed computing frameworks
  • Hands-on experience with Google Cloud Dataflow or Apache Beam
  • Knowledge of CI/CD pipelines and infrastructure-as-code practices
  • Experience with data governance and data quality frameworks
  • Familiarity with Agile or Scrum methodologies
  • Previous experience working in nearshore or remote team settings
  • Understanding of cloud security best practices and data privacy regulations

Responsibilities

  • Design and develop robust data pipelines and ETL/ELT processes using Google Cloud Platform services, including BigQuery, Dataflow, and Pub/Sub
  • Build and optimize cloud-based data warehouses and data lakes, ensuring data quality, security, and accessibility
  • Write clean, efficient, and well-documented code in Python or Java to transform and process large-scale datasets
  • Collaborate with data analysts, scientists, and business stakeholders to understand requirements and translate them into technical solutions
  • Implement data modeling best practices and maintain comprehensive data documentation and metadata management
  • Monitor and troubleshoot data pipeline performance, identifying bottlenecks and implementing optimization strategies
  • Participate in code reviews and contribute to continuous improvement of data engineering practices and standards
  • Maintain version control using Git and implement CI/CD practices for data infrastructure deployments
  • Support data governance initiatives and ensure compliance with data quality and security standards
  • Communicate effectively with distributed teams and provide technical guidance on data engineering solutions