Senior Data Engineer

tapouts · Los Angeles, CA · Remote

About The Position

We are looking for a Senior Data Engineer to join our growing data team. In this role, you will be responsible for designing, building, and maintaining scalable data infrastructure that powers our analytics, AI initiatives, and business operations. This is a hands-on role for someone who thrives in fast-paced environments, thinks like a platform architect, and is passionate about building data systems that matter.

Requirements

  • 5+ years of experience in data engineering or a related field
  • Strong proficiency in SQL — writing complex queries, optimizing performance, and data modeling
  • Strong proficiency in Python — building ETL/ELT pipelines, scripting, and automation
  • Experience with cloud platforms: AWS, GCP, or Azure
  • Hands-on experience with data orchestration tools (Apache Airflow, Prefect, or similar)
  • Experience with big data frameworks (Apache Spark, Kafka, Flink, or similar)
  • Familiarity with data warehousing solutions (Snowflake, BigQuery, Redshift, or similar)
  • Strong understanding of data modeling, schema design, and data architecture principles
  • A platform-first mindset — you think beyond individual pipelines and consider ownership, reliability, and long-term maintainability
  • A data-driven approach — you use metrics to measure pipeline health and continuously improve
  • Strong communication skills — you can collaborate with technical and non-technical stakeholders
  • Comfort working in ambiguous, fast-moving environments and bringing structure to chaos
  • A passion for continuous learning — you stay current with the latest tools and trends in data engineering

Nice To Haves

  • Experience with dbt (data build tool) and the modern data stack
  • Familiarity with streaming and event-driven architectures
  • Knowledge of MLOps and AI pipeline support
  • Experience with data mesh or data platform engineering
  • Familiarity with data governance frameworks and tools (data lineage, data cataloging)

Responsibilities

  • Design, build, and maintain robust, scalable data pipelines (batch and real-time/streaming)
  • Design and develop dashboards that surface key business metrics and enable strategic, data-informed decision-making
  • Develop and optimize complex SQL queries, stored procedures, and data models
  • Write clean, production-grade Python code for data ingestion, transformation, and automation
  • Build and manage cloud-native data infrastructure on AWS, GCP, or Azure
  • Implement and maintain data lakehouse architectures (e.g., Delta Lake, Apache Iceberg)
  • Support ML workflows including feature engineering, model training pipelines, and MLOps integration
  • Ensure data quality, governance, and lineage tracking across all data assets
  • Collaborate with data scientists and analysts to deliver trusted, well-documented datasets
  • Monitor pipeline performance, troubleshoot issues, and optimize for cost and efficiency
  • Contribute to the development of internal data platform tools and frameworks
  • Apply data governance best practices and ensure compliance with data privacy regulations (GDPR, LGPD)