About The Position

Skydio is the leading US drone company and the world leader in autonomous flight, the key technology for the future of drones and aerial mobility. The Skydio team combines deep expertise in artificial intelligence, best-in-class hardware and software product development, operational excellence, and customer obsession to empower a broader, more diverse audience of drone users, from utility inspectors and first responders to soldiers in battlefield scenarios and beyond.

About the Role

We are seeking a Data Engineer with a systems mindset to own and simplify access to the massive amount of data generated by our fleet of autonomous drones. You will play a key role in improving how engineers, researchers, and product stakeholders interact with logs, video, sensor data, and derived analytics, making it easier to extract insights and build better autonomy features. This role partners closely with autonomy engineers, ML researchers, QA, and platform teams to design data pipelines, build indexing and search capabilities, and develop tools that unlock the value of our data.

Areas of Responsibility

Here are some domains where your work will make a difference:

  • Data Discovery & Accessibility: Design systems to unify scattered data sources (logs, telemetry, analytics tables, media, etc.) into easily discoverable and queryable formats.
  • Smart Dataset Generation: Enable efficient curation of machine learning datasets by tagging, indexing, and filtering for relevant scenarios (e.g., environmental conditions, sensor behavior, scene attributes); a rough sketch of this tag-then-filter pattern follows the list.
  • Telemetry & Log Intelligence: Build tools to automatically surface anomalies, regressions, or key signatures in logs and telemetry data (e.g., CPU usage spikes, sensor noise, degraded conditions).
  • Software Performance Monitoring & Tooling: Develop mechanisms to rapidly compare releases and surface regressions in performance metrics, resource usage, and data quality.
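
To make the dataset-generation bullet concrete, here is a minimal, purely illustrative Python sketch of the tag-at-ingestion, filter-at-query-time pattern it describes. All names (FlightSegment, tag_segment, curate), fields, and thresholds are hypothetical assumptions for illustration, not Skydio APIs or data models.

    # Illustrative only: tag flight segments at ingestion, then filter cheaply later.
    # Every name and threshold here is hypothetical, not an actual Skydio interface.
    from dataclasses import dataclass, field

    @dataclass
    class FlightSegment:
        flight_id: str
        lux: float            # ambient light estimate
        wind_mps: float       # wind speed estimate
        cpu_util: float       # mean CPU utilization over the segment
        tags: set = field(default_factory=set)

    def tag_segment(seg: FlightSegment) -> FlightSegment:
        """Attach scenario tags at ingestion so later dataset queries are cheap."""
        if seg.lux < 10:
            seg.tags.add("low_light")
        if seg.wind_mps > 12:
            seg.tags.add("high_wind")
        if seg.cpu_util > 0.9:
            seg.tags.add("cpu_spike")
        return seg

    def curate(segments, required):
        """Filter tagged segments down to those matching every required tag."""
        return [s for s in segments if required <= s.tags]

    if __name__ == "__main__":
        raw = [
            FlightSegment("f001", lux=5.0, wind_mps=14.0, cpu_util=0.95),
            FlightSegment("f002", lux=200.0, wind_mps=3.0, cpu_util=0.40),
        ]
        tagged = [tag_segment(s) for s in raw]
        low_light_windy = curate(tagged, {"low_light", "high_wind"})
        print([s.flight_id for s in low_light_windy])  # -> ['f001']

In practice the same idea would sit behind a pipeline and a queryable index rather than in-memory lists, but the division of labor (cheap tagging up front, expressive filtering later) is the point of the sketch.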

Requirements

  • 3+ years of experience in data engineering, backend engineering, or infrastructure roles.
  • Exposure to robotics, autonomy, or real-world sensor data pipelines.
  • Strong proficiency in Python (or similar language) and SQL.
  • Experience designing scalable data pipelines with tools such as Apache Spark, Airflow, dbt, or equivalent.
  • Familiarity with log processing, time-series analysis, or working with large volumes of semi-structured data.
  • Ability to work cross-functionally with ML engineers, autonomy engineers, and product stakeholders.
  • Systems thinking: you enjoy untangling complexity and designing elegant abstractions that empower others.

Nice To Haves

  • Experience working with robotics, sensor data, or computer vision pipelines.
  • Experience building end-to-end log or telemetry analysis tools that leverage LLMs or natural language interfaces to enable intuitive querying, anomaly detection, or insight extraction.
  • Knowledge of computer vision, scene understanding, or machine learning workflows for data curation.

Responsibilities

  • Architect and maintain scalable data pipelines and services to index, enrich, and query multimodal autonomy data (e.g., time series, media, tabular analytics).
  • Collaborate with autonomy and ML teams to understand data usage patterns and build tools that streamline their workflows.
  • Develop efficient methods for search, tagging, and filtering over structured and unstructured data.
  • Help design systems to label and retrieve rare or complex scenarios, both automatically at ingestion and via manual search.
  • Build dashboards and visualizations to support release monitoring and anomaly detection across a variety of system health signals.
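
As a purely illustrative aside on the last bullet, the sketch below shows one minimal way a release-over-release regression check could be expressed. The metric names, values, and 5% tolerance are invented assumptions, not Skydio's actual tooling or thresholds.

    # Hypothetical sketch: flag metrics that got worse between two releases.
    def flag_regressions(baseline, candidate, tolerance=0.05):
        """Return metrics whose relative increase over baseline exceeds `tolerance`
        (assumes higher values are worse for every metric compared)."""
        regressions = {}
        for metric, base_value in baseline.items():
            new_value = candidate.get(metric)
            if new_value is None or base_value == 0:
                continue
            delta = (new_value - base_value) / base_value
            if delta > tolerance:
                regressions[metric] = round(delta, 3)
        return regressions

    if __name__ == "__main__":
        baseline = {"cpu_util_p95": 0.72, "tracking_latency_ms": 18.0, "dropped_frames": 4}
        candidate = {"cpu_util_p95": 0.81, "tracking_latency_ms": 18.2, "dropped_frames": 4}
        print(flag_regressions(baseline, candidate))  # -> {'cpu_util_p95': 0.125}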

Benefits

  • Competitive base salaries
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401(k) savings plan

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 501-1,000 employees
