About The Position

Build the data foundation that powers Auger’s Supply Chain OS, AI systems, and execution workflows.

Auger is building an operating system for supply chain teams. Our customers rely on Auger to understand reality and change it: reporting, AI-powered decision support, and write-back execution systems that operate at scale.

This is data-centric software engineering with a high bar: you own substantial parts of the transformation layer that turns messy, customer-shared data into a unified, production-grade ontology powering analytics, AI workflows, and execution systems. This is not a “move data from A to B” role. You are expected to own semantic correctness, operability, and durability for the systems you touch, while working within platform standards and influencing improvements across the team.

Requirements

  • Degree in Computer Science, Mathematics, Statistics, or another data-intensive discipline (or equivalent practical experience).
  • 4+ years of professional development experience with strong hands-on SQL and Python in production (Spark or equivalent large-scale batch processing preferred; Scala/Flink/Beam a plus).
  • 3+ years working with structured and semi-structured data in modern warehouses/lakehouses, including practical schema design in evolving domains, incremental processing, and basic performance and cost awareness.
  • Notebook fluency and the judgment to structure notebook work so it is reviewable and promotable.
  • Agent-native fluency with verification: you treat generated SQL and pipelines as proposals until proven correct.
  • Ownership mindset: you debug methodically, drive work to completion, connect your work to customer outcomes, and leave the codebase better than you found it.
  • Clear communication and collaboration; you can drive a feature or multi-step project in your domain, seek input on design, and incorporate feedback.

Nice To Haves

  • Experience in supply chain, planning, or fulfillment domains.

Responsibilities

  • Own your slice of the data lifecycle (ingestion → curated layers → production-ready outputs), including medallion-style patterns and schema contracts, within guidance from senior engineers and platform conventions.
  • Practice test-driven habits for data: clarify correctness for the datasets you touch; add automated checks and regression coverage where it matters; turn bugs and incidents into fixes that stick.
  • Work in an agent-native style: use AI coding agents to move faster on investigation, SQL iteration, and refactors — paired with review, reproducible steps, and proof (tests, queries, invariants) before production.
  • Contribute to reusable patterns and tooling so the team can discover schemas, draft transforms, generate SQL faster, and troubleshoot with less one-off work.
  • Operate what you build: monitoring and alerting as appropriate, participating in incidents for your areas, executing backfills, and following through so issues do not repeat.
  • Reduce local complexity: simplify where you can, remove redundancy, and prefer shared approaches when they clearly reduce risk.
  • Partner with product, science, and platform teammates to clarify requirements, flag tradeoffs early, and deliver work that holds up beyond the first customer.