Software Engineer — ETL & Data

The San Francisco Compute Company · San Francisco, CA

About The Position

We're building the company that will de-risk the largest infrastructure build-out in history. When people finance GPU clusters, the datacenters housing them, and the infrastructure powering them, they need "offtake": someone has signed a contract to lease the cluster for a period of time before it's even built. Financing a GPU cluster is inherently risky, since margins are thin and volumes are huge. Lenders don't want to take on the risk that cluster developers can't repay their loan, and cluster developers really don't want to risk not selling their cluster. As a result, risk is offloaded to the customer through fixed-price, long-term contracts.

If you don't mitigate this customer risk, there's a bubble. This isn't SaaS anymore: application-layer companies sign multi-year contracts for compute and inference, but sell to their own customers on monthly subscriptions. If you mess up a purchase, it's game over; a minor shift in your revenue growth rate might mean the difference between profit and bankruptcy. But what if companies could exit their contract by selling it back to the market? Otherwise, as AI scales, compute only becomes available to those who can effectively take on that risk. A 2-person startup in a San Francisco Victorian can't realistically sign a 5-year take-or-pay contract on a $100m supercomputer. But they may be able to buy the month of compute that someone else sold back. So that's what we make: a liquid market for GPU offtake.

The Role

We're looking for a data-focused engineer to own and evolve our internal data infrastructure. You'll take over a lightweight but powerful OLTP-to-OLAP data pipeline and use it to define, instrument, and monitor the KPIs that matter most across the company. This isn't a "build dashboards and wait for requests" role. You'll work closely with engineering, operations, and leadership to shape what we measure and why, turning raw trading and infrastructure data into clear signals that drive decisions.
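To make "OLTP-to-OLAP" concrete, here is a minimal sketch of the kind of pipeline this role owns. It is illustrative only: the table names, export path, and KPI definitions are assumptions for the sake of example, not our actual schema or tooling.

```python
# Illustrative only: table names, paths, and KPI definitions below are
# assumptions for the sake of example, not our actual schema or tooling.
import duckdb

con = duckdb.connect("analytics.duckdb")  # local columnar (OLAP) store

# Land a nightly export from the transactional (OLTP) database, assumed
# here to be a Parquet dump of an `orders` table, into a fact table.
con.execute("""
    CREATE OR REPLACE TABLE fact_orders AS
    SELECT order_id, cluster_id, gpu_hours_contracted,
           gpu_hours_delivered, price_usd, created_at
    FROM read_parquet('exports/orders/*.parquet')
""")

# Roll the facts up into a daily KPI table that dashboards can read.
con.execute("""
    CREATE OR REPLACE TABLE kpi_daily AS
    SELECT
        date_trunc('day', created_at)  AS day,
        count(*)                       AS orders,
        sum(gpu_hours_contracted)      AS gpu_hours_sold,
        sum(price_usd)                 AS revenue_usd
    FROM fact_orders
    GROUP BY 1
    ORDER BY 1
""")

print(con.sql("SELECT * FROM kpi_daily ORDER BY day DESC LIMIT 7"))
```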

Requirements

  • Strong SQL and data modeling skills; you can write a complex analytical query without a framework
  • Experience with ETL pipelines and columnar stores (DuckDB, ClickHouse, BigQuery, or similar)
  • A bias toward simple, legible solutions over elaborate architectures
  • Ability to drive ambiguous problems to clear outcomes; you can decide what to measure, not just how

Nice To Haves

  • Experience with Rill or similar BI tooling
  • Familiarity with marketplace or infrastructure business models

Responsibilities

  • Own and extend our OLTP-to-OLAP data infrastructure
  • Define and maintain company-wide and team-level KPIs: revenue, utilization, reliability, fulfillment rate, and more (see the sketch after this list)
  • Build and iterate on dashboards that surface actionable insight, not just data
  • Partner with engineers to instrument new product features from the start
  • Investigate anomalies, debug data quality issues, and improve pipeline reliability
  • Help establish data conventions and best practices as we scale
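As a rough illustration of what one of those KPI definitions might look like in practice (the fulfillment-rate formula, table, and column names here are assumptions for the sake of example, not our actual metrics), a weekly metric can usually be expressed as a single analytical query over the OLAP store:

```python
# Illustrative sketch: `fact_orders` and its columns are hypothetical,
# and "fulfillment rate" is assumed here to mean GPU hours delivered as
# a share of GPU hours contracted, per week. Pinning down the real
# definition is part of this role.
import duckdb

con = duckdb.connect("analytics.duckdb", read_only=True)

fulfillment = con.sql("""
    SELECT
        date_trunc('week', created_at)                        AS week,
        sum(gpu_hours_delivered) / sum(gpu_hours_contracted)  AS fulfillment_rate
    FROM fact_orders
    GROUP BY 1
    ORDER BY 1
""").df()  # pandas DataFrame, easy to feed into a dashboard

print(fulfillment.tail())
```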

Benefits

  • Generous equity grant: Team members are offered a competitive salary along with equity in the company
  • Visa sponsorship: Yes, we sponsor visas and work permits
  • Retirement matching: We match 401(k) contributions up to 4%
  • Medical, dental & vision: We offer competitive medical, dental, and vision insurance for employees and dependents and cover 100% of premiums
  • Time off: We offer unlimited paid time off as well as 10+ observed holidays
  • Parental leave: We offer biological, adoptive, and foster parents paid time off to spend quality time with family
  • Daily lunch: We cover lunch daily for employees
  • Unlimited office book budget: You can buy as many books for the office as you want