About The Position

Travel is more than reaching a destination; it’s about the experiences created along the way. We partner with over 200 airline, hospitality, cruise, passenger rail, and financial services companies to transform everyday travel into extraordinary journeys. Guided by our values of ambition, innovation, and collaboration, we continuously raise the bar and believe we are better together. Join us in shaping the future of travel by unlocking the power of data.

About the Role

We are looking for an Intermediate Data Engineer to join our growing Data Engineering team. In this role, you will design, build, and maintain scalable data pipelines and platforms that support analytics, reporting, and product innovation. You will collaborate with engineers, analysts, and business stakeholders to ensure data is accurate, accessible, and reliable across the organization.

Requirements

  • 3+ years of experience in data engineering, data development, or data management
  • Strong hands-on experience with Snowflake and modern data warehouse concepts (data lakes, lakehouse, streaming)
  • Proficiency in Python and SQL for building and optimizing data pipelines
  • Hands-on experience with AWS services such as S3, Glue, Lambda, and Redshift
  • Experience with ETL/ELT, data modeling, and data warehousing concepts
  • Experience with orchestration tools (Airflow, Dagster)
  • Hands-on experience with PySpark and distributed data processing frameworks (e.g., AWS EMR)
  • Knowledge of pipeline performance optimization and debugging
  • Strong problem-solving, analytical, and collaboration skills
  • Experience with version control (Git) and CI/CD workflows
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field

Nice To Haves

  • Exposure to real-time/streaming data (Kafka, Kinesis)
  • Knowledge of infrastructure as code (Terraform, CloudFormation)
  • Experience with RESTful API development and integration
  • Familiarity with BI or visualization tools (Tableau, Looker)
  • Knowledge of Java or Scala

Responsibilities

  • Design, develop, and maintain robust ETL/ELT pipelines to integrate data from multiple sources into a centralized cloud-based data platform
  • Build scalable data ingestion, transformation, and enrichment processes using Python, SQL, and PySpark
  • Optimize data workflows for performance, scalability, and cost efficiency in the cloud
  • Implement data quality and validation checks to ensure trust in reporting, analytics, and data-driven products
  • Collaborate with cross-functional teams to translate business requirements into technical data solutions
  • Support large-scale transformations using distributed processing frameworks
  • Troubleshoot and resolve issues in data pipelines, ensuring reliability and uptime
  • Participate in code reviews and contribute to engineering standards and best practices
  • Document data processes, pipelines, and schemas to improve transparency and reusability
  • Stay current with modern data engineering tools, practices, and cloud technologies, bringing a passion for continual learning and knowledge sharing
  • Build with stakeholders in mind, not just raw pipelines

Benefits

  • RRSP Matching
  • Comprehensive Health Plans
  • Flexible Paid Time Off
  • Travel Experience Perk
  • Annual Wellness Perk
  • Commuter Perk
  • Tenure-based Work From Anywhere Program
  • Parental Leave Top Up
  • Adventure Pass
  • Learning Allowance