EverCommerce - Data Engineer II

EverCommerce | Remote | $120,000 - $140,000

About The Position

EverCommerce is a leading service commerce platform that helps service-based businesses run, grow, and scale. From scheduling and payments to marketing and customer engagement, we power the operational backbone for thousands of businesses. Data is core to how we build great products, make smarter decisions, and deliver value to our customers. Our Data Engineering team enables trusted analytics, real-time insights, and emerging AI-driven experiences across the EverCommerce ecosystem.

We are seeking a Data Engineer II to help design and scale our modern data platform, supporting analytics, self-service BI, real-time use cases, and AI-powered insights. You’ll work closely with analytics engineers, product teams, and business stakeholders to deliver reliable, high-quality data that drives measurable business outcomes. This role is hands-on and impact-focused, ideal for someone who enjoys building Lakehouse-based platforms, enabling streaming data, and supporting AI and GenAI use cases in production.

Requirements

  • 5+ years of experience in a Data Engineering role
  • Strong experience with Python and SQL
  • Hands-on experience with Apache Airflow
  • Experience working with Databricks
  • Expertise using DBT for transformations and analytics modeling
  • Experience building streaming data pipelines with Kafka
  • Experience with data ingestion tools such as Fivetran
  • Working knowledge of Apache Iceberg and modern Lakehouse architectures
  • Experience implementing data quality checks, testing frameworks, and pipeline observability
  • Familiarity with AWS services such as Athena and EC2, and with cloud-based data platforms
  • Strong understanding of data modeling, analytics, and semantic layer design

Nice To Haves

  • Experience enabling AI or GenAI use cases on top of analytics platforms (e.g., Databricks Genie)
  • Experience delivering self-service BI solutions (e.g., ThoughtSpot)
  • Knowledge of data governance, metadata management, and data catalogs
  • Experience supporting SaaS or multi-product platforms
  • Familiarity with privacy, compliance, and secure data access patterns

Responsibilities

  • Design, build, and operate scalable batch and streaming data pipelines
  • Develop and orchestrate workflows using Apache Airflow
  • Implement transformations and analytics-ready datasets using DBT
  • Build and maintain real-time pipelines using Kafka
  • Leverage Databricks for data processing, analytics, and AI enablement
  • Support AI and GenAI use cases, including enabling high-quality data access for tools like Databricks Genie
  • Design and optimize data storage using Apache Iceberg and Lakehouse architecture
  • Ingest and manage data from diverse internal and external sources using Fivetran
  • Handle a wide variety of data structures (structured, semi-structured, and event-based data)
  • Build and maintain a semantic layer that enables trusted reporting and self-service analytics
  • Implement data quality frameworks, monitoring, and unit test automation to ensure reliability at scale
  • Partner with BI, product, and engineering teams to deliver data that is intuitive, trusted, and actionable
  • Optimize performance, scalability, and cost across AWS services such as Athena, EC2, and related tooling
  • Contribute to data platform standards, documentation, and best practices

Benefits

  • Flexibility to work where/how you want within your country of employment – in-office, remote, or hybrid
  • Continued investment in your professional development
  • Day 1 access to a robust health and wellness benefits package, including an annual wellness stipend
  • 401k with up to a 4% match and immediate vesting
  • Flexible and generous time off (FTO)
  • Employee Stock Purchase Program