Data Engineer

Everytable · Los Angeles, CA · Remote
Posted 35 days ago · $115,000 - $150,000

About The Position

Everytable was founded on the belief that healthy food is a human right and shouldn't be a luxury product. Everytable is a purpose-driven public benefit company; working with us is a unique opportunity to be a part of making history by redefining and transforming the food system. We're passionate about making a positive impact in the communities we serve through affordable food access, as well as economic empowerment and environmental wellbeing. We're looking for a Data Engineer to help build and maintain reliable, observable data pipelines and enable analytics and reporting across the business.

Role Overview

As a Data Engineer at Everytable, you will develop and maintain scalable pipelines that power business reporting, forecasting, and operational insights. You'll work closely with finance, marketing, and operations teams to ensure data is accurate, timely, and easy to use. You'll be responsible for building robust pipelines in Airflow, managing transformation logic in dbt, and ensuring that our data workflows are observable and dependable.

Requirements

  • 2-5 years of experience in Data Engineering, Analytics Engineering, Data Science, or related roles
  • Strong proficiency in Python (or another scripting language)
  • Strong SQL, including advanced transformation patterns (joins, window functions, incremental logic)
  • Experience building and managing observable pipelines in Airflow (see the DAG sketch after this list)
  • Experience with dbt (or similar transformation-layer tools)
  • Experience building pipelines that support analytics use cases (e.g., metrics, reporting, dashboards) and enterprise integrations (e.g., ecommerce and ERP integrations)
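
To make the Airflow expectation concrete, here is a minimal sketch of an observable daily DAG, assuming the Airflow 2.4+ TaskFlow API. The task names and the orders table are hypothetical; the retry, SLA, and logical-date patterns are the point.

```python
# Minimal Airflow 2.x DAG sketch: scheduling, retries, and an SLA alert.
# Task names and the orders data are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow.decorators import dag, task


@dag(
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,  # flip to True to backfill historical runs
    default_args={
        "retries": 3,                        # retry transient failures
        "retry_delay": timedelta(minutes=5),
        "sla": timedelta(hours=1),           # alert when a run overshoots
    },
)
def daily_orders_pipeline():
    @task
    def extract_orders(ds=None):
        # `ds` is Airflow's logical date; extracting only that day's
        # records keeps reruns and backfills idempotent.
        print(f"extracting orders for {ds}")
        return 0  # row count, passed downstream via XCom

    @task
    def load_orders(row_count):
        print(f"loading {row_count} rows")

    load_orders(extract_orders())


daily_orders_pipeline()
```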

Nice To Haves

  • Experience with cloud data warehouses (Snowflake, BigQuery, Redshift, Postgres)
  • Familiarity with CI/CD for data pipelines (GitHub Actions, dbt Cloud, etc.)
  • Experience with data observability tools (Monte Carlo, Datadog, Great Expectations, etc.)
  • Experience with event-based pipelines or APIs (Fivetran, Singer, custom ingestion); a hand-rolled ingestion sketch follows this list
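
For the custom-ingestion item above, this is roughly what a hand-written API pull might look like, assuming a paginated REST endpoint; the URL, parameters, and response shape are invented for illustration.

```python
# Hypothetical custom-ingestion sketch: page through a REST API and
# collect records for staging. Endpoint and response shape are assumed.
import requests

API_URL = "https://api.example.com/v1/orders"  # placeholder endpoint


def fetch_new_orders(updated_since):
    """Return every record modified after `updated_since` (ISO timestamp)."""
    records, page = [], 1
    while True:
        resp = requests.get(
            API_URL,
            params={"updated_since": updated_since, "page": page},
            timeout=30,
        )
        resp.raise_for_status()  # surface HTTP failures to the orchestrator
        batch = resp.json().get("results", [])
        if not batch:
            break  # an empty page means the endpoint is drained
        records.extend(batch)
        page += 1
    return records
```

In practice a tool like Fivetran or Singer replaces this kind of hand-written loop, but the watermark-plus-pagination pattern is the same.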

Responsibilities

Pipeline Development & Orchestration

  • Build, maintain, and optimize data pipelines using Apache Airflow
  • Ensure pipelines are observable, reliable, and easy to monitor (alerts, SLAs, runtime logging, dependency integrity)
  • Implement best practices around scheduling, retries, backfills, and incremental processing

Transformation & Modeling

  • Design and develop data models using dbt (or similar transformation tools)
  • Own the SQL transformation layer: write maintainable, scalable SQL models and documentation (an illustrative incremental model appears after this list)
  • Manage testing and data quality (dbt tests, constraints, anomaly monitoring)

Reliability & Maintainability

  • Maintain high standards for correctness, traceability, and performance
  • Improve data pipeline resilience through logging, alerts, documentation, and structured debugging workflows
  • Support ingestion processes and integrations with external and internal systems as needed

Cross-functional Collaboration

  • Partner with stakeholders to understand business requirements and translate them into data models and pipeline solutions
  • Provide guidance on data structure, definitions, and best practices for analytics consumption
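
As referenced under Transformation & Modeling, here is an illustrative incremental model of the kind this role owns: a window function dedupes late-arriving updates, and a watermark scopes each run to new rows. The table and column names (raw.orders, order_id, updated_at) are invented, and the watermark placeholder would be filled in by the orchestrator's last-run state or by dbt's is_incremental() logic.

```python
# Illustrative incremental transform, embedded as SQL in Python.
# raw.orders, order_id, and updated_at are placeholder names; the
# {watermark} value would come from the orchestrator's run state.
INCREMENTAL_ORDERS_SQL = """
with new_rows as (
    select *
    from raw.orders
    where updated_at > '{watermark}'   -- scan only rows since the last run
),

ranked as (
    select
        *,
        row_number() over (
            partition by order_id
            order by updated_at desc
        ) as rn                        -- rn = 1 is the latest version
    from new_rows
)

select * from ranked where rn = 1      -- one row per order, most recent wins
"""
```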

Benefits

  • Medical, dental, and vision insurance
  • Unlimited paid time off
  • Paid holidays
  • A 401(k) retirement plan
  • Meal discounts
  • Other resources designed to support employee well-being and professional growth

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Company Size: 51-100 employees
