Sr Data Engineer

The Walt Disney Company | Glendale, CA
$138,900 - $203,900 | Onsite

About The Position

Disney Entertainment and ESPN Product & Technology is a global organization of engineers, product developers, designers, technologists, and data scientists. The team builds and advances the technological backbone for Disney’s media business globally, marrying technology with creativity to build world-class products, enhance storytelling, and drive velocity, innovation, and scalability.

Ad Platforms is responsible for Disney’s industry-leading ad technology and products, driving advertising performance, innovation, and value across Disney’s sports, news, and entertainment content. The Ads Data team, part of the Ad Platforms organization, aims to transform the advertising landscape across TV and streaming video through data and AI, designing and building solutions to measure and optimize the advertising lifecycle.

This role focuses on designing, building, and scaling the data foundations that power AI adoption across Ad Technology, owning data flow into AI-ready stores and partnering with the AI Core Engineering team. The Senior Data Engineer will design and implement robust data engineering solutions, mentor junior team members, and build resilient pipelines for high-profile AI applications, enabling faster and safer deployments across Disney Ad Tech. The role suits someone with strong technical expertise and a passion for leadership, system design, and delivering business impact through data platforms in a fast-paced, collaborative environment.

Requirements

  • 5+ years of data engineering experience, with at least 1 year in a lead or senior technical role.
  • Experience building and scaling streaming data pipelines in large-scale, distributed environments.
  • Strong skills in Python, Java, and SQL, with expert-level skill in either Python or Java.
  • Proven experience building streaming data pipelines (e.g., Kafka, Flink, Spark, Kinesis).
  • Experience with embedding pipelines and vector stores (e.g., Pinecone, Weaviate, FAISS, pgvector).
  • Strong knowledge of data modeling, storage optimization, and retrieval patterns for large-scale systems.
  • Hands-on experience with workflow orchestration tools (Airflow, Dagster, etc.).
  • Strong collaboration and communication skills, able to partner across AI engineering, infra, and product teams.
  • Familiarity with testing, monitoring, and automation for data pipelines.
  • Bachelor’s degree or higher in computer science or a related quantitative field, or equivalent practical experience demonstrating advanced technical expertise.

Nice To Haves

  • Experience integrating AI-ready data stores with LLM orchestration frameworks (LangChain, LangGraph, etc.).
  • Knowledge of observability and monitoring stacks (Datadog, Prometheus, or equivalent).
  • Background in governance and compliance practices for enterprise data platforms.
  • Experience building data frameworks, registries, or accelerators adopted by multiple teams.
  • Experience ensuring data security, lineage, and auditability in enterprise data environments.
  • Skilled at writing design documentation, driving system architecture reviews, and influencing data engineering culture.
  • Experience with: Python, Java, Databricks, LangChain, vector stores (Pinecone, Weaviate, FAISS, pgvector), SQL, and the AWS big data tech stack (e.g., S3, Glue, MWAA).

Responsibilities

  • Build and maintain high-performance streaming and batch data pipelines that power AI applications, ensuring reliable low-latency ingestion and high-throughput processing.
  • Implement and extend embedding generation workflows, vector store integrations, and retrieval pipelines supporting semantic search, RAG systems, and AI assistants.
  • Develop and optimize scalable storage and retrieval patterns, focusing on cost-efficient architecture and smooth production performance.
  • Implement AI-optimized data models and storage patterns that align with broader enterprise architecture and platform requirements.
  • Integrate pipelines with shared AI platform services (agent frameworks, registries, feature stores), ensuring clean, versioned, and reliable data delivery.
  • Build reusable ingestion, transformation, and data processing components that streamline adoption across engineering teams.
  • Embed end-to-end observability into data systems, including metrics, structured logging, automated alerts, drift detection, and failure analysis.
  • Implement robust data quality validation, schema evolution safeguards, and governance/compliance controls.
  • Ensure deployed pipelines meet high standards for reliability, recoverability, auditability, and long-term maintenance.
  • Drive execution by owning the full development lifecycle: prototyping, implementation, testing, deployment, optimization, and documentation.
  • Collaborate closely with infrastructure, ML engineering, product, and governance teams to deliver production-ready AI capabilities.
  • Lead by example through strong execution, high-quality code, and proactive problem solving.
  • Influence design direction through technical proposals and hands-on delivery rather than formal ownership of standards.
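To illustrate the embedding-and-retrieval duties above, here is a minimal, hypothetical sketch of an embedding workflow feeding a vector store with cosine-similarity retrieval. The toy hash-based embedding and brute-force in-memory store are stand-ins for a real embedding model and a production store such as Pinecone, Weaviate, FAISS, or pgvector; nothing here reflects Disney's actual stack.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic embedding: hash character trigrams into a
    fixed-size vector. A real pipeline would call an embedding model."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]  # unit-normalize for cosine similarity

class InMemoryVectorStore:
    """Brute-force stand-in for a managed vector store."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def upsert(self, doc: str) -> None:
        # Ingestion step: embed the document and persist the pair.
        self.items.append((doc, embed(doc)))

    def query(self, text: str, k: int = 1) -> list[str]:
        # Retrieval step: rank stored docs by dot product with the
        # query embedding (cosine similarity, since vectors are unit-norm).
        q = embed(text)
        scored = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [doc for doc, _ in scored[:k]]

store = InMemoryVectorStore()
for doc in ["ad impression event schema",
            "streaming pipeline runbook",
            "campaign pacing model"]:
    store.upsert(doc)
print(store.query("ad impression event schema", k=1))
```

In a production retrieval or RAG pipeline, the same three stages (embed, upsert, query) remain, but each is swapped for a scalable component: a model endpoint for embedding, a managed index for storage, and approximate nearest-neighbor search for retrieval.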

Benefits

  • A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.