Data Engineer

RED CAT HOLDINGS
Salt Lake City, UT
Onsite

About The Position

We’re hiring a Data Engineer to build our data foundation from the ground up. Today, our data lives across transactional databases, SaaS tools, and external APIs. Over the next few months, you’ll help us stand up our first analytics data warehouse and build the ETL/ELT pipelines that keep it accurate, fresh, and easy to use. You won’t be doing this alone—you’ll work closely with stakeholders (Product, Ops, Finance, and Analytics) and with engineering teammates to make pragmatic choices and ship value quickly.

Requirements

  • Bachelor’s degree in computer science, data engineering, information systems, or an equivalent technical field.
  • 3+ years of data engineering experience, specifically managing a data warehouse.
  • Strong SQL skills (joins, window functions, building reliable transformation logic); see the sketch after this list for the kind of logic we mean.
  • Comfortable in Python (or a similar language) for pipeline code, automation, and data tool glue.
  • Understanding of data warehousing basics: tables, partitions (where relevant), incremental loads, and why modeling choices matter.
  • Some hands-on exposure to ETL/ELT or pipelines in production or in a substantial project/internship (we care about proof you can ship and debug).
  • Solid engineering habits: Git, readable code, and a willingness to write things down.
  • Must be able to walk, stand, and navigate large indoor and outdoor facilities for extended periods of time.
  • Ability to lift, carry, and move materials and equipment weighing up to 25 lbs on a regular basis.
  • May be required to climb ladders, stoop, kneel, or crouch during inspections, maintenance walk-throughs, or emergency response situations.
  • Requires frequent use of a computer and other standard office equipment for documentation, communication, and coordination tasks.
  • Must provide proof of U.S. citizenship or permanent residence; because this role supports export-restricted work, we cannot sponsor work authorization.
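
To give a flavor of the SQL and Python work above, here is a minimal sketch of a dedup-plus-incremental-load transformation built on a window function. It is illustrative only: the table names (raw_orders, analytics_orders), the high-water-mark column, and sqlite3 as a stand-in warehouse client are assumptions, not our actual stack.

    # Illustrative sketch; names and client are placeholders, not our real stack.
    import sqlite3  # stand-in for whatever warehouse client we adopt

    DEDUPE_INCREMENTAL_SQL = """
    -- Keep only the latest version of each order that changed since the last
    -- load: rank duplicates per order_id by recency, then keep rank 1.
    INSERT INTO analytics_orders
    SELECT order_id, customer_id, amount, updated_at
    FROM (
        SELECT
            order_id, customer_id, amount, updated_at,
            ROW_NUMBER() OVER (
                PARTITION BY order_id
                ORDER BY updated_at DESC
            ) AS rn
        FROM raw_orders
        WHERE updated_at > :high_water_mark  -- incremental: new/changed rows only
    ) AS ranked
    WHERE rn = 1;
    """

    def run_incremental_load(conn: sqlite3.Connection, high_water_mark: str) -> None:
        # "with conn" wraps the statement in a transaction (commit on success,
        # rollback on error), so a failed load leaves the table untouched.
        with conn:
            conn.execute(DEDUPE_INCREMENTAL_SQL, {"high_water_mark": high_water_mark})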

Nice To Haves

  • dbt experience (models + tests + docs) or the ability to ramp quickly.
  • Orchestration experience (Airflow, Dagster, Prefect, or similar).
  • Experience with a cloud data warehouse (Snowflake, BigQuery, Redshift).
  • Experience ingesting data from APIs and messy SaaS exports (rate limits, pagination, schema drift, deduping); a sketch of the pagination-and-backoff pattern follows this list.
  • Familiarity with basic data quality/observability practices (freshness checks, anomaly detection, monitoring/alerting).
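
As referenced above, here is a minimal sketch of paging through a cursor-paginated API while backing off on rate limits. The endpoint shape, the cursor parameter, and the payload field names ("data", "next_cursor") are assumptions about a hypothetical API, not any specific vendor's contract.

    # Illustrative sketch; endpoint, cursor parameter, and payload fields
    # ("data", "next_cursor") are hypothetical.
    import time
    import requests

    def fetch_all_records(base_url: str, api_key: str) -> list[dict]:
        """Walk a cursor-paginated API, backing off when rate limited (HTTP 429)."""
        records: list[dict] = []
        cursor = None
        session = requests.Session()
        session.headers["Authorization"] = f"Bearer {api_key}"

        while True:
            params = {"limit": 100}
            if cursor:
                params["cursor"] = cursor
            resp = session.get(base_url, params=params, timeout=30)

            if resp.status_code == 429:
                # Rate limited: honor Retry-After if present, then retry the page.
                time.sleep(int(resp.headers.get("Retry-After", "5")))
                continue
            resp.raise_for_status()

            payload = resp.json()
            records.extend(payload["data"])
            cursor = payload.get("next_cursor")
            if not cursor:  # last page reached
                return records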

Responsibilities

  • Build and maintain ETL/ELT pipelines that ingest data from multiple sources (e.g., internal DBs, SaaS tools, and third-party APIs), and keep them running reliably.
  • Implement scheduling/orchestration, retries, and alerting so failures are visible and recoverable.
  • Design pipelines using incremental processing patterns for scalability and cost efficiency.
  • Help set up our first cloud data warehouse (e.g., Snowflake or BigQuery) and the initial schema/layout.
  • Add data quality checks (tests, constraints, anomaly detection, reconciliation checks) and fix root causes when numbers look wrong.
  • Create lightweight observability: freshness/SLAs, pipeline run monitoring, and basic lineage documentation (a minimal freshness-check sketch follows this list).
  • Make thoughtful tradeoffs between performance, usability, and maintainability.
  • Work with stakeholders to translate questions into durable datasets (not one-off queries).
  • Document sources, models, and assumptions so teammates can self-serve and onboard quickly.
  • Participate in a reasonable support/on-call rotation once the stack is live.
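
For the observability bullet above, here is a minimal sketch of a freshness check. The six-hour SLA, the analytics_orders table, and the run_query/send_alert callables are placeholders for whatever warehouse client and alerting channel (e.g., a Slack webhook or PagerDuty) we adopt.

    # Illustrative sketch; SLA, table name, and the two callables are placeholders.
    from datetime import datetime, timedelta, timezone

    FRESHNESS_SLA = timedelta(hours=6)  # assumed SLA for this example

    def check_freshness(run_query, send_alert) -> bool:
        """Alert if the newest row in analytics_orders is older than the SLA."""
        # run_query is assumed to return MAX(updated_at) as a tz-aware datetime.
        latest = run_query("SELECT MAX(updated_at) FROM analytics_orders")
        age = datetime.now(timezone.utc) - latest
        if age > FRESHNESS_SLA:
            send_alert(f"analytics_orders is stale: last update {age} ago "
                       f"(SLA {FRESHNESS_SLA})")
            return False
        return True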

Benefits

  • Base pay plus a generous annual equity package and potential bonuses.