Data Engineer

Ibotta
Denver, CO
Hybrid

About The Position

Ibotta is looking for a software-focused Data Engineer to join our team and contribute to our mission to Make Every Purchase Rewarding. As a key contributor within the Data Platform Organization, you will join our newly formed Data Intelligence Squad to make Ibotta's data AI-ready. We are looking for a highly motivated data engineer who cares about the craft of clean, well-governed data and understands why it matters more than ever in an AI-driven environment. You'll work on the data contract and metadata standards that underpin everything the squad builds, the compliance tooling that keeps Ibotta's data estate clean and auditable, and the pipelines that feed our AI readiness scoring system. You'll grow into LLM application development and semantic layer work as the team evolves. This is a hybrid position based in Denver, Colorado, requiring three days in the office (Tuesday, Wednesday, and Thursday). Candidates must live in the United States. Not based in Denver? We will offer a relocation bonus to help make your move to the Mile High City a smooth one.

Requirements

  • 3+ years of software engineering experience with a focus on data engineering, analytics engineering, or backend development.
  • Proficiency in Python and SQL, knowledge of Databricks, and familiarity with medallion architecture (bronze/silver/gold) or similar layered data design patterns.
  • Experience implementing semantic layers to ensure AI agents use governed, consistent metric definitions rather than querying raw tables directly.
  • Understanding of data governance concepts (metadata standards, data ownership, schema enforcement, access control) and experience keeping data assets clean and well-documented.
  • Experience with schema validation, data freshness monitoring, data observability tooling, or similar quality practices.
  • Experience using and/or building LLMs, natural language query (NLQ) systems, and other AI-related tooling, especially in relation to data analytics.
  • Proficiency in AI-assisted coding platforms (Claude, Copilot, Cursor, etc.) a plus.
  • Bachelor's degree in Computer Science, Engineering, or a related field.

Nice To Haves

  • Experience with data catalog, metadata management, or data transformation tooling
  • Exposure to data privacy, classification, or compliance requirements in a data context
  • Familiarity with configuration-as-code or automated deployment workflows
  • Experience with data observability or data quality monitoring
  • Agile development experience

Responsibilities

  • Implement and maintain data contract specifications that codify table ownership, schema expectations, and quality thresholds; enforce standards through automated validation in the deployment pipeline; collaborate with data owners to improve metadata coverage across the team's data estate
  • Contribute to an automated scoring pipeline that evaluates data assets on their readiness for AI and analytical use; build and maintain the jobs that collect quality signals and surface scores to data teams
  • Support the discovery and classification of sensitive data across the data estate; maintain tagging and lineage automation frameworks; help translate data access and privacy policies into practical guardrails in partnership with the platform team; monitor AI usage across the company, defining new avenues for AI assistance
  • Build the systems that make people actually trust what the AI tells them about our data, such as query accuracy checks, guardrails that keep AI tools on certified data, and clear attribution so users can see where an answer came from; make sure we have the monitoring in place to catch problems before they erode confidence
  • Build and maintain the semantic framework that makes data assets more legible to both people and AI systems, adding descriptions, context, and query guidance that improve the accuracy of natural language queries; work with analysts and domain experts to capture and formalize business definitions
  • Proactively leverage AI tools (e.g., Claude, Databricks Genie) to accelerate development, maintain code quality, and explore new approaches to data engineering problems
  • Partner with adjacent teams to onboard data assets into the AI-ready platform, document processes and contribute to the team's shared knowledge base
  • Embrace and uphold Ibotta's Core Values: Integrity, Boldness, Ownership, Teamwork, Transparency, and A good idea can come from anywhere.

Benefits

  • Competitive pay
  • Flexible time off
  • Benefits package (including medical, dental, vision)
  • Employee Stock Purchase Program
  • 401k match
  • Paid parking
  • Snacks and occasional meals