About The Position

Obvio AI is dedicated to preventing traffic fatalities by deploying solar-powered, AI-assisted cameras to enforce traffic laws in vulnerable pedestrian areas. Our innovative approach has already significantly reduced reckless driving in partner cities. Founded by the team behind Motive's AI dashcam and supported by Bain Capital Ventures and Khosla Ventures, we are building a global intelligence layer for safer streets.

Requirements

  • 6+ years of experience building and operating production backend or data-intensive systems at scale.
  • Meaningful experience with ML-heavy pipelines: ownership of systems through their full lifecycle (design, deployment, scaling, on-call) in a context where ML inference was a first-class part of the system.
  • Hands-on experience using a workflow orchestration tool to build production pipelines.
  • Strong understanding of cloud infrastructure fundamentals (compute, queues, storage, networking), with a focus on cost, reliability, and operational simplicity.
  • Fluency in ML systems: experience building or operating pipelines where ML inference is a core stage, with an understanding of its workload needs (throughput, GPU economics, model versioning, production performance visibility).
  • Pragmatic decision-making: the ability to evaluate tradeoffs and build solutions that fit the actual scale and constraints of the problem.

Nice To Haves

  • Experience with computer vision (CV) or video pipelines.

Responsibilities

  • Build the orchestration layer: Design and implement a scalable workflow system for event ingestion, routing, and processing, ensuring graceful failure handling at high throughput.
  • Scale the inference fleet: Build the compute layer for parallel processing and burst capacity, designing worker pools, queuing, and autoscaling for GPU-bound workloads on ECS.
  • Design the data plumbing: Own the end-to-end data path from edge devices to pipeline output, including storage, metadata, and processing triggers, ensuring observability, debuggability, and auditability.
  • Build the model serving and lifecycle layer: Establish infrastructure for loading versioned CV models, handling inference reliably, optimizing GPU utilization and throughput (dynamic batching, multi-model serving, quantization, TensorRT/ONNX), and ensuring seamless model version promotion and rollback.
  • Set the engineering standard: As an early hire, develop playbooks, runbooks, deployment procedures, and testing standards for the growing team.

Benefits

  • Competitive compensation
  • Early-stage equity
  • Opportunity to build a world-class ML platform organization