About The Position

Rewards Network is seeking a Senior Data Scientist to lead the technical evolution of its large-scale personalization and assignment platform. This role involves designing and owning the data systems that match members to offers, from batch processing of massive datasets to the scoring frameworks that determine what each member sees. It is a senior individual contributor position with technical leadership scope: guiding junior team members and owning workstreams end-to-end. The platform is moving toward ML-driven personalization and generative AI, and the ideal candidate will help shape this direction. This is a hybrid position requiring three days a week in the Chicago office (Tuesday-Thursday).

Requirements

  • Master’s degree in data science or a related field.
  • 5+ years of experience in a data science role.
  • Strong technical foundation across both data science and data engineering.
  • Proven experience designing and leading large-scale data processing systems (hundreds of millions to billions of records), including batch architecture, partitioning, staging, and performance optimization.
  • Track record designing activity-based segmentation and tiering frameworks (e.g., RFM-style models, engagement tiers, merchant activity classifications) — from threshold definition through refresh cadence and validation against business outcomes.
  • Hands-on background building scoring, ranking, or recommendation frameworks, with feature selection, weighting strategies (rule-based, heuristic, or ML-driven), and evaluation against business objectives; experience evolving such systems from deterministic scoring toward ML-based personalization.
  • Experience designing and managing customer segmentation pipelines and feature generation at scale, including the lifecycle management of member groups, derived attributes, and reusable feature sets.
  • Experience with workflow orchestration (Airflow/MWAA or equivalent) and AWS data services (S3, Glue, Aurora/PostgreSQL).
  • Strong SQL and Python skills — able to review, guide, and produce production-quality data pipeline code.
  • Understanding of event-driven architectures and Kafka-based data replication patterns.
  • Experience with or strong understanding of real-time or near-real-time data systems, and the ability to provide architectural guidance even when not the primary builder.
  • Ability to define pipeline SLAs and data freshness guarantees, including monitoring, alerting, and incident response for batch and near-real-time workflows.
  • Experience working with large-scale member or customer data in a personalization, targeting, loyalty, or recommendation context.
  • Demonstrated ability to work cross-functionally and influence without authority; self-directed and able to own a workstream end-to-end with minimal oversight.
  • Strong written and verbal communication; able to produce clear documentation and present findings to non-technical audiences.

Nice To Haves

  • Experience with offer, loyalty, dining, or hospitality platforms.
  • Familiarity with Scala or JVM-based systems, particularly in real-time API or microservice contexts that integrate with data pipelines.
  • Experience with analytics engineering (dbt or similar) or oversight of BI data model layers.
  • Familiarity with CDC-based replication patterns and data synchronization between systems.
  • Familiarity with ML model deployment and serving (AWS SageMaker, Bedrock, or equivalent), A/B testing frameworks, and an informed point of view on how foundation models and RAG-based architectures can be applied to personalization and recommendation at scale.

Responsibilities

  • Design and own the scoring framework that ranks eligible offers per member, defining features, weighting logic, and validating against business outcomes, then evolving it from deterministic scoring toward ML-driven personalization.
  • Lead segmentation and feature pipelines: member group construction, derived attributes, bucketing strategy, and reusable feature sets for eligibility evaluation and targeting.
  • Architect and optimize large-scale batch processing workflows handling hundreds of millions to billions of records, including partitioning, bulk ingestion, and performance tuning.
  • Define and operate SLAs across the pipeline: batch completion, feed delivery, attribute freshness, and assignment turnaround.
  • Provide architectural guidance on a near-real-time assignment API layer and its integration with the broader batch pipeline.
  • Define and maintain data contracts with downstream consumers (analytics marts, dashboards, adjacent platforms) and oversee the incremental build-out of analytics data models.
  • Translate between business stakeholders (product, marketing, finance) and the engineering team, comfortably holding both business and technical conversations.
  • Document architecture, data models, pipeline logic, and feature generation processes to reduce key-person dependency and support team continuity.
  • Shape the future roadmap for personalization and recommendations, including A/B testing frameworks, behavioral modeling from member activity, and the role of ML and generative AI in assignment and eligibility.

Benefits

  • Comprehensive benefits package
  • Competitive time off benefits, including flexible PTO, 11 company holidays, and parental leave
  • Generous dining reimbursement when you dine with our restaurant clients
  • 401(k) plan with a company match
  • Two medical plan options: a standard PPO or a High Deductible Health Plan (with a company-matched HSA for HDHP participants)
  • Partnership with Rx n Go, offering certain prescriptions for free
  • Two dental plan options and a vision plan
  • Flexible Spending Accounts
  • Pre-tax commuter benefit program
  • Accident, Critical Illness, and Hospital Indemnity Insurance Plans
  • Short-term and long-term disability insurance
  • Company-paid life insurance and AD&D insurance
  • Supplemental employee, spouse, and child life insurance
  • Employee Life Assistance Program