Senior Data Engineer

Pattern | Lehi, UT

About The Position

Pattern accelerates brands on global ecommerce marketplaces using proprietary technology and AI. Drawing on more than 66 trillion data points and sophisticated machine learning and AI models, Pattern optimizes and automates every lever of ecommerce growth for global brands, including advertising, content management, logistics and fulfillment, pricing, forecasting, and customer service. Hundreds of global brands depend on Pattern’s ecommerce acceleration platform every day to drive profitable revenue growth across 60+ global marketplaces, including Amazon, Walmart.com, Target.com, eBay, Tmall, TikTok Shop, JD, and Mercado Libre.

As a Senior Data Engineer, you will be a high-impact "Game Changer" responsible for architecting and building the foundation of Pattern's data-driven future. You will tackle massive, petabyte-scale challenges, transforming raw data into high-octane fuel for our AI models and global marketplace strategies. This is your chance to lead high-stakes technical initiatives that directly accelerate growth for hundreds of global brands in a fast-paced, elite engineering environment.

Requirements

  • Bachelor’s degree in Computer Science, Data Science, or a related technical field (or equivalent experience).
  • 7+ years of professional data engineering experience with a heavy focus on ETL/ELT and data modeling.
  • 5+ years of expert-level SQL, including window functions, CTEs, and deep performance tuning.
  • 4+ years of professional Python development focused on data pipelines and tooling.
  • 3+ years of hands-on experience building and optimizing large-scale data warehouses such as Snowflake, BigQuery, or Redshift.
  • Proficiency with open-source frameworks such as Apache Spark, Trino, Kafka, and Debezium.
  • A "Data Fanatic" mindset with experience handling petabyte-scale diverse datasets.

Nice To Haves

  • Expertise in AWS and related infrastructure tooling (Terraform, EKS, Lambda)
  • Experience with Apache Iceberg or Delta Lake
  • Background in real-time streaming (Kafka/Kinesis)

Responsibilities

  • Designing and implementing robust ETL/ELT pipelines using Airflow, dbt, and cloud-native architectures.
  • Writing sophisticated, production-grade Python code to automate data orchestration and processing.
  • Building and optimizing complex SQL queries and dimensional models for OLAP- and OLTP-based systems.
  • Collaborating with cross-functional teams to ingest and harmonize data from dozens of global marketplaces.
  • Building and maintaining infrastructure-as-code and containerized workflows to ensure platform reliability.
  • Leveraging AI thoughtfully to optimize processes and workflows.

Benefits

  • Unlimited PTO
  • Paid Holidays
  • Onsite Fitness Center
  • Company Paid Life Insurance
  • Casual Dress Code
  • Competitive Pay
  • Health, Vision, and Dental Insurance
  • 401(k) match. Pattern matches 100% of the first 3% in eligible compensation deferred and 50% of the next 2% in eligible compensation deferred.