Uber • Posted 2 months ago
$198,000 - $220,000/yr
Full-time • Senior
Remote • San Francisco, CA
Transit and Ground Passenger Transportation

We're looking for a Senior Data Engineer who thrives on solving complex data challenges and architecting scalable, reliable systems. You'll play a critical role in designing, building, and evolving Uber's Safety & Insurance data ecosystem—enabling the next generation of safety, risk, and compliance products. As a senior member of the team, you will lead end-to-end data initiatives—from conceptual design through production deployment—while mentoring other engineers and influencing technical direction across multiple domains. This role demands strong technical depth, a passion for data excellence, and the ability to partner effectively with cross-functional stakeholders across product, analytics, and platform engineering.

Responsibilities:
  • Design, build, and maintain scalable data pipelines for batch and streaming data across Safety & Insurance domains.
  • Architect data models and storage solutions optimized for analytics, machine learning, and product integration.
  • Partner cross-functionally with Safety, Insurance, and Platform teams to deliver high-impact, data-driven initiatives.
  • Ensure data quality through validation, observability, and alerting mechanisms.
  • Evolve data architecture to support new business capabilities, products, and feature pipelines.
  • Enable data science workflows by creating reliable feature stores and model-ready datasets.
  • Drive technical excellence, code quality, and performance optimization across the data stack.
  • Mentor and guide engineers in data engineering best practices, design patterns, and scalable architecture principles.

Basic Qualifications:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field—or equivalent practical experience.
  • 5+ years of professional experience in Data Engineering, Data Architecture, or related software engineering roles.
  • Proven experience designing and implementing scalable data pipelines (batch and streaming) that support mission-critical applications.
  • Advanced SQL expertise, including window functions, common table expressions (CTEs), dynamic SQL, hierarchical queries, query performance optimization, and materialized views.
  • Hands-on experience with big data ecosystems such as Apache Spark (PySpark or Scala), Apache Flink, Hive/Presto, and Kafka (real-time streaming).
  • Strong Python/Go programming skills and solid understanding of object-oriented design principles.
  • Experience with large-scale distributed storage and databases (SQL + NoSQL), e.g., Hive, MySQL, Cassandra.
  • Deep understanding of data warehousing and dimensional modeling (Star/Snowflake schemas).
  • Experience on cloud platforms such as GCP, AWS, or Azure.
  • Familiarity with Airflow, dbt, or other orchestration frameworks.
  • Exposure to BI and analytics tools (e.g., Tableau, Looker, or Superset).

Preferred Qualifications:
  • Expertise in distributed SQL engines (Spark SQL, Presto, Hive) and deep understanding of query optimization.
  • Hands-on experience building streaming and near-real-time pipelines using Kafka, Flink, or Spark Structured Streaming.
  • Knowledge of OLAP systems such as Apache Pinot or Druid for real-time analytics.
  • Experience developing data quality frameworks, monitoring, and automated validation.
  • Proficiency in cloud-native data solutions (e.g., BigQuery, Redshift, Snowflake).
  • Working knowledge of Scala or Java in distributed computing contexts.
  • Demonstrated ability to mentor junior engineers and establish best practices for data infrastructure.

Compensation:
  • The base salary range for this role is USD $198,000 to USD $220,000 per year.
  • Eligible to participate in Uber's bonus program.
  • May be offered an equity award and other types of compensation.
  • Eligible for various benefits.