Data Scientist, API

OpenAI | San Francisco, CA | Hybrid

About The Position

As a Data Scientist on the API team, you’ll build the measurement systems that make our platform legible and improvable. You’ll define the metrics that matter, identify and quantify developer friction, evaluate launches and platform changes, and translate data into product decisions that improve reliability and developer outcomes at scale. You’ll partner closely with Product, Engineering, Research, and Finance to ensure our metrics are trusted, our experimentation is rigorous, and our insights turn into shipped improvements. This role is based in San Francisco. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.

Requirements

  • 10+ years of experience in data science roles within product or technology organizations (platform or developer-facing experience is a plus).
  • Expertise in statistics and causal inference, applied in both experimentation and observational studies.
  • Expert-level SQL and proficiency in Python for analytics, modeling, and experimentation.
  • Proven experience designing and interpreting experiments and making statistically sound recommendations.
  • Experience building datasets, metrics, and data pipelines that power production decision-making.
  • Experience developing and extracting insights from business intelligence tools (e.g., Tableau) and building self-serve solutions.
  • Strong product sense and an impact-driven mindset: you turn ambiguity into crisp frameworks that drive roadmap decisions.
  • Ability to build relationships with diverse stakeholders and cultivate strong partnerships across Product, Engineering, Research, and GTM teams.
  • Strong communication skills, including the ability to bridge technical and non-technical audiences.
  • Ability to operate effectively in a fast-moving, ambiguous environment with limited structure.
  • Consistently among the first to adopt the latest AI tools: you use them daily to increase your own throughput, and you proactively turn them into durable workflows that change how your team and org operate.

Nice To Haves

  • Experience with developer platforms, APIs/SDKs, or usage-based products.
  • Experience with platform reliability analytics, incident impact measurement, or performance/cost optimization.
  • Familiarity with AI evaluation and quality measurement systems (online/offline evals, human-in-the-loop, safety/quality guardrails).

Responsibilities

  • Own the core KPI framework for the API platform, spanning developer adoption, engagement, retention, and platform health.
  • Build end-to-end funnels that identify where developers succeed or get stuck—from first integration through scaling to production.
  • Define and operationalize platform guardrails (e.g., reliability, latency, error rates, cost/efficiency) and connect them to user outcomes.
  • Design and evaluate experiments and rollouts to quantify the impact of platform and product changes.
  • Partner with product and engineering teams to improve instrumentation, data quality, and metric definitions so decisions are fast and correct.
  • Translate complex analysis into clear, actionable insights for leadership and cross-functional stakeholders.
  • Develop and socialize dashboards, tools, and self-serve data products that help teams answer product questions quickly.
  • Help establish data science standards and best practices for measuring AI platform performance and developer success.
  • Partner with other data scientists across the company to share learnings and raise the bar on measurement and decision-making.

Benefits

  • Relocation assistance