Senior Data Engineer

Quilt LLC
Remote

About The Position

We’re looking for a Senior Data Engineer to design, build, and optimize our data platforms so teams across the company can make fast, reliable, data-driven decisions. You’ll be a key technical leader, owning end-to-end data pipelines and modeling, and setting best practices around how we work with data. You’ll work heavily with Databricks, Spark, SQL, and Python, building scalable data solutions that power analytics, reporting, and data products. Experience in payments or financial services is a strong plus.

Requirements

  • 7+ years of professional experience as a Data Engineer, Software Engineer, or similar role.
  • Strong hands-on experience with Databricks (or a very similar cloud data platform) including cluster management, jobs, and notebooks.
  • Advanced experience with Apache Spark for batch and/or streaming data processing.
  • Expert-level SQL skills (complex joins, window functions, query optimization).
  • Strong Python skills for data engineering (e.g., PySpark, data processing libraries, scripting).
  • Proven experience in data modeling and designing schemas for analytics and reporting.
  • Experience building and maintaining data pipelines in a cloud environment (AWS, Azure, or GCP).
  • Strong understanding of data warehousing concepts, ETL/ELT best practices, and data lifecycle.
  • Solid software engineering fundamentals: version control (git), testing, code reviews, and CI/CD.
  • Excellent communication skills and the ability to collaborate with technical and non-technical stakeholders.

Nice To Haves

  • Experience in payments, fintech, banking, or broader financial services (e.g., transaction data, ledgers, risk, fraud, reconciliation).
  • Experience with streaming technologies (e.g., Spark Structured Streaming, Kafka, Kinesis, or similar).
  • Familiarity with dbt or similar transformations-as-code frameworks.
  • Experience with orchestration tools (e.g., Airflow, Databricks Workflows).
  • Knowledge of BI tools (e.g., Power BI, Tableau, Looker) and how data models power them.
  • Exposure to machine learning workflows and supporting data science teams.
  • Experience implementing data governance, lineage, and catalog tools.

Responsibilities

  • Design and build data pipelines
      ◦ Develop, maintain, and optimize ETL/ELT pipelines on Databricks and Spark.
      ◦ Integrate data from multiple internal and external sources into a centralized data platform.
  • Own data modeling & architecture
      ◦ Design and maintain robust data models (e.g., star/snowflake schemas, data vault, dimensional models) to support analytics and self-service BI.
      ◦ Establish and enforce data modeling standards and documentation.
  • Ensure data quality, reliability, and performance
      ◦ Implement data quality checks, validation frameworks, and monitoring.
      ◦ Tune queries and jobs for performance and cost efficiency in Databricks and downstream systems.
  • Collaborate and lead
      ◦ Partner with data analysts, data scientists, and product/engineering teams to understand data needs and translate them into technical solutions.
      ◦ Provide technical leadership and mentorship to other data engineers; help review designs and code.
  • Champion governance & best practices
      ◦ Contribute to and refine our data governance, security, and access control practices.
      ◦ Drive best practices around version control, CI/CD for data, and code standards.

Benefits

  • 401(k) plan with company match
  • Medical, Dental, and Vision Plans
  • Paid Time Off
  • Paid Parental Leave
  • Paid Volunteer Leave
  • Fully Remote Work environment