Data Scientist

Angel
Provo, UT
Hybrid

About The Position

Angel is changing the future of entertainment and is one of the fastest-growing distributors. Gone is the old model where the deepest pockets pick the stories we share. Angel restores choice to our two million Guild members, who decide what we produce, what we take to theaters, and, most importantly, what parents bring into their homes. Our library of light-amplifying stories has grown 10x in under two years, and as it grows, the gap between what members would love and what they actually find is the most important problem on the platform.

You'll be the first dedicated data scientist on Discovery. You'll own how we understand user engagement and behavior, build predictive models, and create the analytical foundation that makes our recommendation system measurable, improvable, and eventually intelligent, powering personalized discovery for millions of members. From recommendation systems to the metrics that drive our strategy, you'll turn data into insights and insights into action, shaping how fans discover and fall in love with Angel's ever-growing library of stories. Today, our recommendations run on AWS Personalize; your work will determine how far that takes us and when we've outgrown it.

This is a data science role, not an ML engineering role. Day one is about analytical rigor: metrics, experimentation, causal inference, and making the team smarter about our members. But we're building toward a future where Angel owns its recommendation models end to end. If you're a strong data scientist who wants to grow into owning models in production, this is the role where that trajectory is real and supported.

Requirements

  • Statistical rigor. You design experiments correctly: power analysis, multiple comparisons, confidence intervals, Bayesian methods where appropriate. You can explain to a non-technical stakeholder why a result is or isn’t significant.
  • Causal inference chops. You’ve worked with observational data where naive correlations are misleading. You’re familiar with propensity score matching, difference-in-differences, instrumental variables, or regression discontinuity, and you know when to reach for them.
  • SQL and Python fluency. SQL is your first language for data exploration. Python for analysis, modeling, and automation. Your code is clean enough that someone else can read it six months later.
  • Experimentation design and analysis. You’ve designed, run, and analyzed A/B tests in production. You understand interaction effects, novelty effects, and Simpson’s paradox.
  • Communication. You translate complex analysis into clear narratives. Stakeholders trust your conclusions because you show your reasoning, name your assumptions, and flag what you don’t know.
  • Data modeling. Experience with dbt or equivalent transformation frameworks. You’ve built analytical data models that other teams actually use.
  • You write Python like a software engineer, not just a notebook user: tests, packaging, code reviews.
  • You’ve thought about what happens after an analysis becomes a model: data pipelines, feature generation, monitoring, retraining.
  • You’re curious about systems design for ML features: latency, throughput, failure modes, observability.
  • You’ve touched some part of the lifecycle around a deployed model, even if it wasn’t your primary job.
  • 6+ years as a data scientist or in a senior analytical role.
  • Experience with large-scale user engagement and behavior data. Streaming, entertainment, marketplace, or consumer subscription domains preferred.
  • Track record of defining metrics frameworks that stakeholders actually adopted.
  • Familiarity with modern data tools: dbt, data warehousing (Snowflake, BigQuery, Redshift), experimentation platforms (GrowthBook, Optimizely), BI tools (Rill, Looker).
  • Must be authorized to work in the United States.
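To make the experiment-design bar concrete: a minimal sketch of the kind of sample-size calculation this role involves. This uses only the Python standard library; the function name, baseline rate, and effect size are illustrative, not Angel's actual tooling or metrics.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, mde, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-proportion z-test.

    p_base: baseline conversion rate (e.g. a CTR of 0.10)
    mde:    minimum detectable effect, absolute (p_treat = p_base + mde)
    alpha:  two-sided significance level
    power:  desired probability of detecting an effect of size mde
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = nd.inv_cdf(power)           # quantile for desired power
    p_treat = p_base + mde
    variance = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    n = (z_alpha + z_beta) ** 2 * variance / mde ** 2
    return math.ceil(n)

# Illustrative: detecting a 1-point lift on a 10% baseline at the
# usual alpha=0.05, power=0.8 needs roughly 15k users per arm.
print(sample_size_per_arm(p_base=0.10, mde=0.01))
```

Halving the detectable effect roughly quadruples the required sample, which is why choosing the minimum detectable effect up front matters as much as the test statistic.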

Nice To Haves

  • Experience with recommendation systems or personalization is a strong plus, not a prerequisite.

Responsibilities

  • Define, instrument, and maintain the Discovery metrics framework across web, mobile, and TV. Model metrics (precision, recall, coverage, diversity), customer metrics (CTR, playthrough, completion, session depth, cold-start ramp time), and business metrics (retention segmented by recommendation engagement). You decide what we measure, how we measure it, and when a metric is lying to us.
  • Own the A/B testing and experimentation pipeline for Discovery surfaces. Design experiments with statistical rigor: sample sizing, duration, segmentation, guard-rail metrics. Build the institutional muscle so the team ships with evidence, not opinions. We use GrowthBook.
  • Decode how members discover, browse, and engage with content across three very different platforms. Identify patterns in Guild voting, theatrical-to-streaming conversion, content affinity, and churn risk. Surface the insights that change how the product team thinks about the problem.
  • Distinguish correlation from causation in engagement data, where selection bias is everywhere. When recommendation engagement correlates with retention, determine whether the system is driving retention or whether high-intent users are simply more likely to click. Design quasi-experiments when randomization isn’t feasible.
  • Build and maintain the dbt models, data pipelines, and analytical infrastructure that make data accessible and trustworthy for the Discovery team and the broader organization. If the data is wrong, nothing else matters.
  • Evaluate which new signals (voting history, explicit ratings, content metadata, theatrical engagement) improve recipe performance in AWS Personalize. Graduate from analyzing features to building them.
  • Prototype recommendation approaches (content-based filtering, hybrid models, embeddings) and evaluate them against the golden eval set you built in your first months.
  • Take a model from notebook to production: writing testable Python, managing data lifecycles (pipelines, feature stores, monitoring, retraining), and thinking about systems design (latency, failure modes, observability).
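The quasi-experimental work described above can be sketched in a few lines. This is a difference-in-differences estimate on made-up numbers; the scenario, function, and values are hypothetical illustrations, not Angel data.

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate of a treatment effect.

    Each argument is the mean outcome (e.g. a retention rate) for one
    group/period cell. Relies on the parallel-trends assumption:
    absent treatment, both groups would have moved by the same amount.
    """
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical: retention for members exposed to a new recommendation
# surface vs. a comparable unexposed segment, before and after launch.
effect = diff_in_diff(treat_pre=0.40, treat_post=0.46,
                      ctrl_pre=0.38, ctrl_post=0.40)
print(round(effect, 3))  # → 0.04
```

The naive post-period comparison (0.46 vs. 0.40) would credit the feature with a 6-point lift; subtracting the control group's own 2-point drift attributes only 4 points to the treatment, which is exactly the correlation-vs-causation discipline the role calls for.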

Benefits

  • Commensurate with experience and scope of responsibilities.