Analytics Engineer, Lifecycle Efficiency

Instacart
$138,000 - $174,000 · Remote

About The Position

We're transforming the grocery industry. At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers. Instacart has become a lifeline for millions of people, and we're building the team to help push our shopping cart forward. If you're ready to do the best work of your life, come join our table.

Instacart is a Flex First team. There's no one-size-fits-all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work, whether that's from home, an office, or their favorite coffee shop, while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

Overview

Instacart's Lifecycle Efficiency team sits within Data Science & Analytics and partners closely with Incentives Marketing to power how we invest, measure, and optimize customer growth. We're looking for an Analytics Engineer to build and own the data models, pipelines, and semantic layers that enable clear, trusted measurement of a marketing portfolio representing millions of dollars in annual spend. In this role, you will shape the data foundation for incentives, promotions, and lifecycle communications across our marketplace. You'll collaborate daily with Data Engineering, Product, Marketing, and Engineering to define source-of-truth datasets, standardize KPI definitions, and enable self-serve analytics at scale. Your work will unlock faster decision-making, measurable ROI, and smarter experimentation.
You'll join a focused team of five that values collaboration, clear thinking, and rolling up our sleeves to solve complex problems. If you thrive in a fast-paced environment, enjoy building reliable datasets from messy real-world signals, and want to drive visible impact across a high-ownership domain, we'd love to hear from you.

Requirements

  • 4+ years of experience in analytics engineering, data engineering, or BI development building production data models in a modern cloud data stack.
  • Advanced SQL proficiency (e.g., complex joins, window functions, query optimization) with a track record of performance tuning in Snowflake, BigQuery, or Redshift.
  • 2+ years implementing and maintaining dbt projects (models, tests, macros, documentation) in production with Git-based workflows.
  • Hands-on experience orchestrating ELT/ETL pipelines with Airflow, Dagster, or similar, including scheduling, dependency management, and alerting.
  • Experience building semantic layers and BI models (e.g., Looker/LookML, Semantic Layer, or equivalent) to enable reliable self-serve analytics.
  • Demonstrated use of automated data quality testing and data observability (e.g., dbt tests, Great Expectations, or similar) and ownership of documentation and lineage.
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, Statistics, or a related field, or equivalent practical experience.
  • Proven success partnering cross-functionally with Product, Marketing, and Engineering to translate ambiguous requirements into scalable datasets and clear deliverables.

Nice To Haves

  • Experience supporting growth or marketing teams, including incentives, promotions, lifecycle/CRM, attribution, or incrementality measurement.
  • Proficiency in Python for data transformation, orchestration tasks, or analytics utilities within the ELT workflow.
  • Experience with experimentation data (e.g., assignment, guardrails, lift) and building datasets to support A/B tests and causal inference workflows.
  • Familiarity with data governance and cataloging (e.g., DataHub, Amundsen) and warehouse cost/performance optimization best practices.
  • Background in consumer technology, marketplaces, or e-commerce operating at scale with complex event and transactional data.

Responsibilities

  • Design, build, and maintain robust, production-grade data models (e.g., in dbt) that power incentives, promotions, and lifecycle analytics, including standardized fact/dimension tables and a consistent metrics layer.
  • Partner with Data Engineering to model source data from multiple systems (e.g., marketing platforms, event streams, transactional data) and implement efficient, auditable ELT patterns in a modern cloud warehouse.
  • Define and operationalize KPI and metric definitions for marketing efficiency and ROI; enable self-serve analytics in BI tools by implementing clean, documented semantic models and LookML (or equivalent).
  • Set and enforce data quality standards with automated testing, lineage, documentation, and monitoring to ensure stakeholders can trust dashboards and analyses used to manage millions in annual spend.
  • Collaborate with Product, Marketing, and Engineering to scope requirements, prioritize a roadmap, and deliver high-impact datasets for experimentation, attribution, cohorting, and lifecycle performance reporting.
  • Continuously improve performance, reliability, and cost efficiency of pipelines and queries; drive best practices in version control, code review, and CI/CD for analytics engineering.