Senior Data Platform Engineer

Rocket Money | Washington, DC
$160,000 - $200,000 | Onsite

About The Position

Data Platform Engineers at Rocket Money further our mission by building and maintaining the infrastructure that enables our company to understand our users and products through reliable, scalable data systems. We build the foundational platform that ingests, processes, and serves data, enabling our Data Analytics, Machine Learning, and Software Engineering teams to build data products efficiently. We work closely with engineering teams to ensure data is captured correctly at the source, design resilient pipelines, and create self-service capabilities that empower stakeholders.

The Data Platform team is in an exciting phase of growth and formalization. While we have established tools and processes in place (dbt, BigQuery, Terraform), we're actively building the comprehensive standards, practices, and systems that will scale with Rocket Money as it grows. We're looking for engineers who can take ownership and evolve workflows into reliable, well-documented, production-grade systems. This is an opportunity to shape architectural decisions, establish team practices, and define how Rocket Money works with data for years to come.

We have a strong preference for process-oriented systems thinkers who excel at balancing stakeholder requirements, technical debt, and organizational complexity, and who thrive in environments where they need to create structure from ambiguity. You'll need to be comfortable making principled decisions, taking ownership not just of code but of outcomes, and operating with significant autonomy as we build the platform together. The full list of responsibilities appears below.

Requirements

  • 6+ years of experience working with data infrastructure, data engineering, or platform engineering within a fast-paced environment.
  • Highly proficient with SQL, Python, and cloud-based Infrastructure-as-Code (e.g. Terraform).
  • Comfortable working with bash/shell scripting.
  • 4+ years of production experience with modern data stacks including data warehouses (BigQuery, Snowflake, or Redshift), orchestration tools, managed ingestion services, and infrastructure as code (Terraform, Pulumi, or CloudFormation).
  • 2+ years of experience building and maintaining production data pipelines, whether through ELT tools, custom applications, streaming systems, or event-driven architectures.
  • Have successfully "professionalized" data infrastructure before: taken scrappy, working systems and evolved them into reliable, well-documented, production-grade platforms.
  • Ability to articulate what "production-ready" means in a data context.
  • A bias toward action; you aren't paralyzed by imperfect solutions.
  • Understand when "good enough for now with a plan to improve" beats "perfect but six months late."
  • Ship incrementally and iterate based on feedback.
  • Comfortable being the first person to tackle a problem.
  • Ability to take a high-level business need and figure out the technical approach.
  • Know when to ask for help and can articulate what you need.
  • Take ownership seriously—not just of writing code, but of outcomes.
  • Implement monitoring, write runbooks, create alerts, and ensure systems can be maintained by others.
  • Think about the full lifecycle of systems, not just initial delivery.
  • Strong opinions, weakly held.
  • Ability to make and defend architectural decisions, but open to feedback and willing to change course when presented with better information.
  • Ability to disagree and commit.
  • Understand that "building from scratch" doesn't mean rejecting existing tools—it means thoughtfully selecting, configuring, and integrating managed services and open-source solutions to create a cohesive platform.
  • Know when to build and when to buy.
  • Experience making big changes to critical data infrastructure.
  • Have successfully re-architected, migrated, or upgraded data tooling with strict SLAs, without significantly affecting downstream stakeholders.

Nice To Haves

  • Led a data infrastructure migration or modernization project where you defined the vision, approach, and implementation.
  • Created internal tools, frameworks, or CLIs that improved how teams work with data (not just one-off scripts).
  • Established data platform best practices like CI/CD workflows, testing frameworks, or observability standards where none existed.
  • Expertise in cloud platforms and technologies analogous to our stack: GCP (BigQuery, Datastream, Cloud Functions, Vertex AI, GCS), dbt, Fivetran, Postgres, Python, Terraform, Looker, Retool.
  • Analogous experience: AWS (Redshift, DMS, Lambda, SageMaker, S3) or Azure (Synapse, Data Factory, Functions), Snowflake, Airbyte/Stitch, infrastructure as code tools, BI platforms.

Responsibilities

  • Be an end-to-end owner of our data platform infrastructure, ensuring its security, usability, and performance.
  • Work closely with analytics engineers, machine learning engineers, and software engineers to ensure the platform meets their needs.
  • Make collaborative decisions about data tooling, pipeline design, and governance, and implement opinionated interfaces that facilitate easy and best-practice-aligned development for other teammates.
  • Continuously reduce failure rates for data sources by shifting alerting "to the left", i.e. catching and quarantining bugs and failures as close to the source as possible.
  • Analyze patterns in how source data is generated, modeled, and consumed.
  • Work with stakeholders to implement the best solution when new data sources are added.
  • Take ownership of existing tools and workflows that may be functional but lack formal structure, documentation, or reliability measures.
  • Assess what's working, identify gaps, and systematically improve systems to production-grade quality.
  • Document everything you build knowing that you're creating the foundation others will build upon.
  • Proactively communicate with multiple stakeholders on platform capabilities, technical constraints, architectural decisions, project priorities, and platform support.
  • Confidently juggle multiple projects and priorities in our fast-paced environment and work with stakeholders and platform teammates to ensure infrastructure changes, migrations, and improvements are delivered on schedule.
  • Automate aggressively and deliberately; anything from GitHub Actions to Slack Workflows to minimize repetitive tasks.
  • Effectively judge when the level of effort is appropriate and avoid over-engineering.

Benefits

  • Health, Dental & Vision Plans
  • Life Insurance
  • Long/Short Term Disability
  • Competitive Pay
  • 401k Matching
  • Team Member Stock Purchasing Program (TMSPP)
  • Learning & Development Opportunities
  • Tuition Reimbursement
  • Unlimited PTO
  • Daily Lunch, Snacks & Coffee (in-office only)
  • Commuter benefits (in-office only)