Data Operations Engineer

Norwegian Cruise Line Holdings Ltd.
Miami, FL

About The Position

Norwegian Cruise Line Holdings (NCLH) is seeking a Data Operations Engineer to join its team. NCLH is a leading global cruise company operating the Norwegian Cruise Line, Oceania Cruises, and Regent Seven Seas Cruises brands. The company currently operates 32 ships, employs over 35,000 shipboard crew, and visits approximately 700 port destinations annually, with plans to grow the fleet to 34 ships and take delivery of 13 additional ships through 2036. NCLH emphasizes a culture built on Innovation, Collaboration, Transparency, and Passion, and values People Excellence. The Data Operations Engineer will monitor and maintain data workflows, ensure data quality, respond to incidents, enforce security and compliance, collaborate with teams across the organization, and drive continuous improvement in data operations.

Requirements

  • Bachelor’s degree in Computer Science, Industrial Engineering, or a related field; or an equivalent combination of education and experience.
  • 3–5 years’ experience developing, validating, and implementing data warehouses, data systems, or cloud-based solutions (IaaS, PaaS).
  • Experience supporting business initiatives through optimized data pipelines and quality processes.
  • Proficiency with technologies and tools such as Python, PowerShell, Bash (or Spark), SQL, Git, Terraform, Puppet, Docker/Kubernetes, and Agile methods and tools such as JIRA.
  • Experience with message brokers such as Event Hubs or Kafka (Confluent).
  • Hands-on with observability tools (Prometheus, Grafana, Monte Carlo, or similar tools) and data-quality frameworks.
  • Experience in on-call support, root-cause analysis, and operational runbook development.
  • Strong interpersonal, presentation, and communication skills.
  • Strong analytical and problem-solving abilities with experience managing multiple priorities in a fast-paced environment.
  • Proven success in driving cross-functional collaboration and delivering operational excellence.
  • Solid understanding of agile methodologies, QA practices, and user experience, with a focus on digital and technical acumen.
  • Familiarity with advanced analytics concepts—including AI, ML, and Data Science—and the ability to design data processes that support these initiatives alongside traditional enterprise needs.
  • Proficiency with collaboration tools (e.g., JIRA, Confluence, LucidChart) and SQL query tools.
  • Experience configuring and managing databases (e.g., Microsoft SQL Server, Oracle, MySQL, HANA, Snowflake, Databricks).
  • Knowledge of both traditional on-premises and cloud-based infrastructures (preferably AWS), including Windows- and Linux-based environments.
  • Working knowledge of various open-source technologies and cloud services.
  • Excellent communication skills, with the ability to collaborate across technical and business teams.

Responsibilities

  • Configure and maintain observability tools (Prometheus, Grafana, Monte Carlo) to track data workflow health, SLIs, and SLAs.
  • Develop dashboards and automated alerts for data-pipeline anomalies, performance degradations, and infrastructure issues.
  • Implement automated data-validation tests, schema-drift detection, and anomaly detection within production pipelines.
  • Collaborate with Data Quality and Metadata teams to integrate quality checks into orchestration frameworks.
  • Participate in on-call rotations to diagnose, resolve, and document data operations incidents.
  • Conduct root-cause analyses and implement preventive measures to reduce recurrent failures.
  • Enforce data-access policies, encryption standards, and governance controls to meet SOX and PCI requirements.
  • Provide audit support by documenting data lineage, access logs, and compliance reports.
  • Partner with Data Engineering, DevOps, Data Science, and business stakeholders to prioritize operational needs.
  • Maintain clear runbooks, operational playbooks, and incident post-mortems.
  • Identify opportunities to automate manual processes and incorporate Lean/DataOps practices.
  • Stay current on emerging monitoring, quality, and incident-management tools to enhance operational maturity.
  • Perform other job-related functions as assigned.