About The Position

Data is central to how EverCommerce builds products, drives decisions, and unlocks innovation. Our data platform powers analytics, real-time insights, and emerging AI-driven capabilities across the EverCommerce ecosystem, and we are looking for a Senior Data Engineer to design, build, and scale it. This is a high-impact, hands-on role: you will lead the development of robust, scalable data systems, mentor engineers, and partner cross-functionally to deliver trusted, high-quality data. You will also help evolve the platform toward automation and intelligent pipeline development, leveraging modern tooling and AI where it creates real efficiency.

Requirements

  • 7+ years of experience in Data Engineering or a related field
  • Strong proficiency in Python and SQL
  • Deep experience with Apache Airflow and workflow orchestration (a minimal DAG sketch follows this list)
  • Expertise in DBT for data transformation and modeling
  • Strong hands-on experience with Databricks
  • Strong experience building streaming pipelines (Kafka or similar)
  • Strong hands-on experience with data ingestion tools such as Fivetran
  • Hands-on experience building automated QA, monitoring, and observability for data lake / lakehouse environments
  • Solid understanding of Lakehouse architecture and Apache Iceberg
  • Experience implementing data quality, testing, and observability frameworks
  • Familiarity with AWS ecosystem (Athena, EC2, S3, etc.)
  • Strong foundation in data modeling and semantic layer design
  • Proven ability to design scalable systems and influence technical direction
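
To give a concrete flavor of the orchestration work above, here is a minimal Airflow DAG sketch using the TaskFlow API. The DAG name, schedule, and extract/load steps are illustrative placeholders, not an actual EverCommerce pipeline.

    # A minimal sketch of the kind of Airflow orchestration this role involves.
    # The schedule and the extract/load helpers are illustrative assumptions.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(
        schedule="@daily",
        start_date=datetime(2024, 1, 1),
        catchup=False,
    )
    def daily_ingestion():
        @task
        def extract() -> list[dict]:
            # Placeholder for pulling records from a source system.
            return [{"id": 1, "amount": 42.0}]

        @task
        def load(records: list[dict]) -> None:
            # Placeholder for writing records to the lakehouse.
            print(f"loaded {len(records)} records")

        load(extract())


    daily_ingestion()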

Nice To Haves

  • Experience enabling AI/GenAI use cases on analytics platforms (e.g., Databricks Genie or similar)
  • Exposure to AI-assisted development tools for:
      ◦ Automating data pipeline generation
      ◦ Accelerating ingestion-to-consumption workflows
      ◦ Automating QA from ingestion to consumption
      ◦ Automating DBT model generation
      ◦ Improving testing, documentation, and lineage tracking
  • Experience building or leveraging metadata-driven or declarative pipelines (see the sketch after this list)
  • Familiarity with self-service BI tools (e.g., ThoughtSpot)
  • Knowledge of data governance, cataloging, and lineage systems
  • Experience in SaaS or multi-product ecosystems
  • Understanding of privacy, compliance, and secure data access patterns
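
As a rough illustration of the metadata-driven pipelines mentioned above, the sketch below drives a generic ingestion step from declarative config: table definitions live in data, and one function materializes each of them. The SOURCES dict and run_ingestion helper are hypothetical.

    # A minimal sketch of a metadata-driven pipeline: sources are described
    # declaratively, and a single generic step is driven by that metadata.
    SOURCES = {
        "orders": {"system": "postgres", "schedule": "@hourly", "pk": "order_id"},
        "invoices": {"system": "salesforce", "schedule": "@daily", "pk": "invoice_id"},
    }


    def run_ingestion(name: str, spec: dict) -> None:
        """Generic ingestion step driven entirely by the metadata spec."""
        print(f"ingesting {name} from {spec['system']} keyed on {spec['pk']}")


    if __name__ == "__main__":
        for name, spec in SOURCES.items():
            run_ingestion(name, spec)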

Responsibilities

  • Design, build, and operate scalable batch and streaming data pipelines
  • Lead architecture decisions for Lakehouse-based data platforms
  • Develop and orchestrate workflows using Apache Airflow
  • Build transformations and analytics-ready datasets using DBT
  • Develop and maintain real-time pipelines using Kafka (a streaming sketch follows this list)
  • Leverage Databricks for large-scale data processing and advanced analytics
  • Design and optimize storage using Apache Iceberg and Lakehouse architecture
  • Ingest and manage data from diverse sources using tools such as Fivetran's managed data lake
  • Build and maintain a semantic layer for trusted reporting and self-service analytics
  • Implement data quality frameworks, observability, and automated testing
  • Optimize performance, scalability, and cost across AWS services (Athena, EC2, etc.)
  • Partner with BI, product, and engineering teams to deliver actionable data solutions
  • Mentor junior engineers and contribute to engineering best practices and standards
  • Drive improvements in developer productivity and pipeline reliability
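
For a sense of the streaming work described in this list, here is a minimal sketch of a Kafka-to-Iceberg pipeline using Spark Structured Streaming, as available on Databricks. The broker, topic, table, and checkpoint names are assumptions for illustration, not a definitive implementation.

    # A minimal sketch of a streaming pipeline: Spark Structured Streaming
    # reads from Kafka and appends to an Apache Iceberg table.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("events-to-iceberg").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
        .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
    )

    query = (
        events.writeStream.format("iceberg")
        .outputMode("append")
        .option("checkpointLocation", "s3://bucket/checkpoints/events")  # hypothetical path
        .toTable("lakehouse.raw.events")  # hypothetical Iceberg table
    )

    query.awaitTermination()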

Benefits

  • Flexibility to work where/how you want within your country of employment – in-office, remote, or hybrid
  • Continued investment in your professional development
  • Day 1 access to a robust health and wellness benefits package, including an annual wellness stipend
  • 401k with up to a 4% match and immediate vesting
  • Flexible and generous time off (FTO)
  • Employee Stock Purchase Program