Senior Data Engineer

Royal Caribbean Cruises Ltd
Miami, FL
Onsite

About The Position

Royal Caribbean Group's Analytics Team has an exciting career opportunity for a full-time Senior Data Engineer. This position is responsible for delivering, managing, and operating scalable, trusted data products and platforms that enable analytics, AI/ML, and Generative AI use cases. The Senior Data Engineer will lead the curation of datasets and data pipelines created by various business departments, data scientists, and other technology teams. They will use innovative tools and techniques to automate data preparation and integration tasks, minimizing manual processes and improving productivity. The role also involves developing and improving standards for quality development, testing, and production support, and acting as an innovation catalyst by prototyping new approaches and turning them into production-grade capabilities.

Requirements

  • Bachelor's or Master's degree in Engineering, Computer Science, Information Technology, or equivalent
  • 6+ years of experience in Data Warehouse design and data modeling patterns (relational and dimensional)
  • 6+ years of development experience with ETL tools such as Talend or Azure Data Factory (ADF)
  • Must have strong analytical skills for effective problem solving
  • Ability to work independently, handle multiple tasks simultaneously, and adapt quickly to change while working with a variety of people and work styles
  • Must be capable of articulating technical concepts concisely to non-technical audiences
  • Hands-on experience with at least one major cloud (AWS/Azure/GCP) and one warehouse/lakehouse technology (e.g., Snowflake, BigQuery, Redshift, Databricks/Lakehouse)
  • Strong proficiency in Python and/or Java/Scala; ability to build maintainable services and libraries
  • Experience building or operating streaming pipelines using Kafka, Kinesis, or Pub/Sub
  • Experience with Spark (or equivalent) and a workflow orchestrator (e.g., Airflow) plus familiarity with CI/CD and automated testing
  • Experience partnering with data science/ML teams, supplying training-ready datasets/features, and designing data products that support ML in production
  • Strong ability to design, build, and manage data pipelines, encompassing data transformation, data models, schemas, metadata, and workload management
  • Strong experience with database programming languages for relational databases, such as SQL, PL/SQL, and T-SQL
  • Strong experience with relational databases (Oracle, SQL Server, MySQL) and NoSQL databases such as Couchbase
  • Strong experience with data management architectures such as data warehouses and data lakes, and supporting processes such as data integration, governance, and metadata management
  • Strong experience working with large, heterogeneous datasets: building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies, including ETL/ELT, data replication/CDC, message-oriented data movement, stream data integration, and data virtualization
  • Strong experience optimizing existing ETL processes, data integration flows, and data preparation flows, and helping move them into production
  • Strong experience writing and optimizing advanced SQL queries in a business environment with large-scale, complex datasets
  • Strong knowledge of industry best practices for data warehousing and data lakes
  • Experience with cloud data platforms such as Databricks, Snowflake, BigQuery, or Redshift.
  • Strong hands-on experience with scripting languages such as Python, Scala, or Java
  • Working knowledge of relational and dimensional data modeling patterns.
  • Working knowledge of the essential elements of data architecture, platforms and products.
  • Ability to build and launch new data models
  • Ability to address stakeholder concerns through business data modeling, including data entities, attributes, and their relationships

Nice To Haves

  • Experience with GitHub Copilot and Databricks Assistant
  • Experience with unstructured document ingestion, chunking, embeddings, vector databases, and retrieval patterns

Responsibilities

  • Designs and develops durable, flexible, and scalable data pipelines, data load processes, and frameworks to automate the ingestion, processing, and delivery of structured and unstructured data, both batch and real-time streaming.
  • Develops reusable data products and curated datasets aligned to enterprise domains.
  • Implements modern ELT and distributed data processing patterns.
  • Conducts performance tuning of ETL processes for large data volumes; develops and oversees monitoring to ensure data loads complete on schedule and data is accurate.
  • Performs the data analysis required to troubleshoot and resolve data-related issues.
  • Identifies ways to improve data reliability, efficiency, and quality.
  • Creates and maintains technical design documentation.
  • Assists with requirements gathering.
  • Enables AI/ML and GenAI use cases: delivers governed training/inference datasets and feature foundations, and partners with ML/AI engineers on data access patterns that support ML pipelines and production ML deployments.
  • Identifies opportunities to simplify architectures, automate manual processes, and improve developer experience, and evaluates new tools and techniques through controlled prototypes.
  • Participates in planning, applies design patterns, and performs code reviews.
  • Follows standards, processes, and methodologies to develop each phase of the data architecture (e.g., data manipulation and database technology generation processes).
  • Mentors junior engineers, raises the bar on best practices, and leads technical initiatives across teams.
  • Helps resolve issues in the implementation of data architecture components.
  • Applies DevOps principles to data pipelines to improve cost, communication, integration, reuse, and automation.
  • Responsible for production support, including root-cause analysis and developing fixes to restore ETL and data operational readiness, planning and coordinating maintenance, and conducting audits to validate jobs and data.
  • Position requires on-call and off-hours support.

Benefits

  • Competitive compensation and benefits package
  • Excellent career development opportunities