About The Position

We are looking for a Staff Data Engineer to own the telemetry data platform for vehicle-generated data. This role centers on ingestion, infrastructure, and large-scale data processing — ensuring raw telemetry flows are reliable, scalable, cost-efficient, and analytics-ready. You will operate at the intersection of data engineering and backend systems, designing pipelines that ingest, process, and serve high-volume, high-velocity vehicle telemetry to downstream analytics, ML, and visualization systems. This is a platform ownership role, not a dashboarding or reporting position.

Requirements

  • 10+ years of experience in data engineering and/or backend platform engineering operating production systems at scale
  • Deep hands-on experience with large-scale telemetry or IoT data, including high-throughput and low-latency ingestion
  • Strong expertise in AWS data and infrastructure services (S3, Kinesis/MSK, Glue, EMR, Lambda, Step Functions, EventBridge)
  • Proven experience owning end-to-end ETL/ELT infrastructure using Spark/PySpark (batch and streaming) on Databricks or EMR
  • Solid understanding of streaming architectures using Kafka or equivalent systems and time-series–optimized storage patterns
  • Strong backend engineering skills using Python and/or Java/Scala, including API design (REST/gRPC) and distributed systems fundamentals
  • Experience with data platform architectures such as data lakes and lakehouses, schema registries, and metadata systems
  • Hands-on experience with orchestration frameworks (Airflow, MWAA, Dagster) and production-grade observability (logging, metrics, tracing)
  • Infrastructure-as-code expertise using CloudFormation, Terraform, or CDK to manage scalable and reliable systems
  • A track record of building highly reliable, fault-tolerant systems with clear ownership, strong SLAs, and operational excellence

Nice To Haves

  • Experience with vehicle, sensor, or IoT data
  • Streaming-first architectures
  • Experience supporting real-time inference pipelines
  • Prior Staff or Principal-level ownership of data platforms

Responsibilities

  • Design and own large-scale ingestion pipelines for vehicle telemetry data (events, metrics, time-series) with high throughput and low latency
  • Architect and operate end-to-end ETL/ELT systems from raw ingestion to warehouse/lake consumption
  • Define schema evolution, versioning, and backward-compatibility strategies for telemetry data at scale
  • Build safe and repeatable backfill, replay, and reprocessing mechanisms for historical and real-time data
  • Design data storage and lifecycle strategies across hot, warm, and cold paths to balance cost and performance
  • Develop fault-tolerant, observable, and debuggable pipelines with strong SLAs around freshness, completeness, and latency
  • Implement backend services and APIs for telemetry ingestion, configuration management, metadata, and orchestration
  • Apply strong software engineering practices including object-oriented design, automated testing, CI/CD, and code reviews
  • Establish automated data quality checks, anomaly detection, alerting, lineage, and auditability across the platform
  • Provide technical leadership by setting platform direction, reviewing designs, mentoring engineers, and influencing product and engineering roadmaps

Benefits

  • Robust health coverage — excellent health, dental, and vision insurance covered up to 100% by ALSO, with FSA & HSA options
  • One Medical membership and dedicated insurance advocates
  • Rich fertility and family-building benefits with Progyny
  • Flexible time off
  • 401(k) match