Data Engineer - Data Platform & Analytics

Moody's Corporation, Boca Raton, FL
Hybrid

About The Position

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and to customers in meaningful ways.

Moody's is transforming how the world sees risk. As a global leader in ratings and integrated risk assessment, we're advancing AI to move from insight to action, enabling intelligence that not only understands complexity but responds to it. We decode risk to unlock opportunity, helping our clients navigate uncertainty with clarity, speed, and confidence.

If you are excited about this opportunity but do not meet every single requirement, please apply! You may still be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Eligibility to work in the U.S. is required, as Moody's will not pursue visa sponsorship for this position now or in the future.

About The Team

This team is mission critical, working on one of the key and most challenging components of the Data Estate's Enterprise Data Platform (CORE). We are a modern, forward-thinking, innovation-driven data engineering team supporting the largest company and financials database in the world, the backbone of our company's success in today's and tomorrow's AI-dominated financial industry.

Requirements

  • 5+ years of experience in data engineering, building and operating production data pipelines
  • Strong hands-on experience with: Python, PySpark, Scala, SQL
  • Proven experience designing or working with configuration-driven or metadata-driven data pipelines
  • Experience working in a Databricks-based data platform
  • Solid understanding of data modeling, schema evolution, and large-scale dataset management
  • Experience deploying data solutions in AWS and Azure cloud environments
  • Working knowledge of CI/CD concepts and experience integrating data pipelines into automated delivery workflows
  • Strong collaboration and communication skills, with the ability to work across global, cross-functional teams
  • Demonstrated proficiency in artificial intelligence concepts, with hands-on experience using AI tools to streamline workflows and enhance operational efficiency
  • Proven ability to implement AI-powered solutions to solve business challenges
  • Growing awareness of AI risk management and a commitment to responsible and ethical AI use
  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience

Nice To Haves

  • Using dbt for data transformations, testing, and documentation
  • Familiarity with data governance, cataloging, and lineage tools
  • Experience supporting externally facing or commercial data products
  • Exposure to infrastructure-as-code or platform automation tooling
  • Knowledge of data quality frameworks and monitoring solutions

Responsibilities

  • Implement robust ETL pipelines that make a large and diverse set of domain datasets available within our Databricks ecosystem.
  • Develop and maintain scalable ETL/ELT pipelines that ingest, transform, and publish datasets across multiple data domains.
  • Implement and extend configuration-based pipeline frameworks to efficiently support numerous datasets with consistent patterns and controls.
  • Build data transformations and validations using Python, PySpark, and SQL within Databricks.
  • Ensure data products are well-structured, performant, and optimized for downstream consumption by internal teams and external commercial clients.
  • Partner with data producers, platform teams, and consumers to define data contracts, schemas, SLAs, and quality standards.
  • Apply best practices for data reliability, observability, performance tuning, and cost optimization.
  • Contribute to and follow CI/CD practices for data pipelines, including automated testing, deployment, and promotion across environments.
  • Develop and operate data pipelines in cloud environments, with hands-on experience across AWS and Azure.
  • Produce and maintain clear technical documentation for pipelines, configurations, and operational processes.
  • Support data governance initiatives, including data quality, lineage, and access management.
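The configuration-driven pipeline pattern described in the responsibilities above can be illustrated with a minimal plain-Python sketch (no Spark dependency; every dataset name, step name, and function here is a hypothetical example, not Moody's actual framework): a metadata record declares which transforms apply to a dataset, and a generic runner executes them in order, so many datasets can share one codebase with consistent patterns and controls.

```python
# Minimal sketch of a metadata-driven pipeline runner.
# All dataset names and transform steps are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Callable

# Registry of reusable transform steps, keyed by the name configs use.
TRANSFORMS: dict[str, Callable[[list[dict]], list[dict]]] = {}

def transform(name: str):
    """Decorator that registers a transform under a config-visible name."""
    def wrap(fn):
        TRANSFORMS[name] = fn
        return fn
    return wrap

@transform("drop_nulls")
def drop_nulls(rows):
    # Validation-style step: keep only rows with no null fields.
    return [r for r in rows if all(v is not None for v in r.values())]

@transform("uppercase_ticker")
def uppercase_ticker(rows):
    # Normalization step: standardize the (hypothetical) ticker column.
    return [{**r, "ticker": r["ticker"].upper()} for r in rows]

@dataclass
class DatasetConfig:
    """Metadata describing one dataset's pipeline."""
    name: str
    steps: list[str] = field(default_factory=list)

def run_pipeline(config: DatasetConfig, rows: list[dict]) -> list[dict]:
    """Apply each configured step in order; unknown step names fail fast."""
    for step in config.steps:
        rows = TRANSFORMS[step](rows)
    return rows

config = DatasetConfig(name="company_financials",
                       steps=["drop_nulls", "uppercase_ticker"])
raw = [{"ticker": "moco", "revenue": 100},
       {"ticker": "abc", "revenue": None}]
clean = run_pipeline(config, raw)
print(clean)  # [{'ticker': 'MOCO', 'revenue': 100}]
```

In a Databricks setting the same shape applies with PySpark DataFrames instead of lists of dicts, and the `DatasetConfig` records typically live in version-controlled YAML or a metadata table rather than in code.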

Benefits

  • medical
  • dental
  • vision
  • parental leave
  • paid time off
  • a 401(k) plan with employee and company contribution opportunities
  • life insurance
  • disability insurance
  • accident insurance
  • a discounted employee stock purchase plan
  • tuition reimbursement