Product Manager, Data Science Platform (AI/ML)

project44 · Chicago, IL (Hybrid)

About The Position

At project44, we believe in better. We challenge the status quo because we know a better supply chain isn't just possible, it's essential. Better for our customers. Better for their business. Better for the world.

With our Decision Intelligence Platform, Movement, we're redefining how global supply chains operate. By transforming fragmented logistics data into real-time, AI-powered insights, we empower companies to connect instantly, see clearly, act decisively, and automate intelligently. Our Supply Chain AI enhances visibility, drives smarter execution, and unlocks next-generation applications that keep businesses moving forward.

Headquartered in Chicago, IL, with a second headquarters in Bengaluru, India, we are powered by a diverse global team tackling the toughest logistics challenges with innovation, urgency, and purpose. If you're driven to solve meaningful problems, leverage AI to scale rapidly, drive impact daily, and be part of a high-performance team, we should talk.

We're hiring a technical, hands-on Product Manager to scale ML-powered products (ETA, risk, anomalies/exceptions, network insights) while building Data Science as a Platform: reusable capabilities that accelerate model development, deployment, monitoring, and governance across multiple product areas. You'll partner daily with Data Science, ML Engineering, Data Engineering, Platform Engineering, and Product teams to ship production ML and establish repeatable, measurable ML delivery.

Requirements

  • 5–8+ years Product Management experience with 3–4+ years focused on DS/ML-driven products (or equivalent)
  • Demonstrated ability to ship end-to-end ML systems to production and iterate based on outcomes
  • Strong technical fluency across the ML lifecycle (data → features → training → serving → monitoring → retraining)
  • Ability to operate between the Senior and Principal levels: sets direction, aligns stakeholders, and drives execution across multiple teams
  • Excellent written communication (PRDs, decision docs) and crisp cross-functional leadership
  • In-office Commitment: Employees are expected to contribute to our collaborative culture by working in the office two days weekly
  • SQL (strong) and working knowledge of Python / notebooks / data analysis workflows
  • Data platforms: Snowflake / BigQuery / Redshift / Databricks (Delta Lake)
  • Orchestration & ETL: Airflow / Dagster / Prefect, dbt, Spark
  • Streaming/event systems: Kafka / Kinesis / Flink
  • MLOps: experiment tracking/model registry (MLflow / Weights & Biases), pipelines (Kubeflow / SageMaker Pipelines / TFX / Flyte), serving (SageMaker / Vertex AI / Databricks Model Serving / KServe / BentoML)
  • Feature stores: Feast / Tecton / SageMaker Feature Store
  • Monitoring/observability: Arize / WhyLabs / Evidently + Prometheus/Grafana/Datadog
  • Platform fundamentals: Docker/Kubernetes, API design (REST/gRPC), SLAs/SLOs, security & PII basics
  • LLMs/GenAI (RAG, embeddings, vector DBs like Pinecone/Weaviate/Milvus), evaluation + guardrails

Nice To Haves

  • Experience in logistics / supply chain / transportation or other high-volume operational domains
  • Familiarity with geospatial/event-time data, carrier APIs/EDI, entity resolution, and enterprise exception workflows
  • Platform product experience with internal “developer” customers and adoption metrics

Responsibilities

  • Own the roadmap for applied ML capabilities beyond ETA (risk scoring, exception prediction, anomaly detection, carrier/network performance insights), from discovery to launch and iteration.
  • Define and deliver a DS/ML platform: feature management, experimentation, model registry, deployment patterns, monitoring/observability, governance, and self-serve tooling.
  • Translate customer workflows into ML problem statements: labels/targets, constraints, SLAs, interpretability, and “do no harm” launch gates.
  • Drive evaluation and experimentation: offline metrics/backtesting, online testing (A/B, holdouts), and measurable business impact.
  • Partner with engineering on batch + real-time inference architectures, streaming/event-time feature needs, and reliability (SLOs, incident playbooks).
  • Establish and track platform success metrics: time-to-first-model, deployment frequency, reuse rate, model performance stability, incident rate, and ROI.