Figma • Posted 29 days ago
Full-time • Mid Level
Remote • San Francisco, CA
1,001-5,000 employees
Professional, Scientific, and Technical Services

Figma is all-in on AI - not just as a feature, but as a foundational shift in how we build, scale, and serve our users. As our CEO and executive team have shared, AI is central to Figma's long-term product strategy, and we're investing deeply across the stack: from model quality and infrastructure reliability to data pipelines and evaluation workflows.

We're hiring a seasoned Technical Program Manager (TPM) to help lead the platform side of that work - building scalable systems to support annotation, capacity planning, and model delivery. This is a technical, execution-heavy role for someone excited to tackle AI reliability, scale, and data quality in equal measure. You'll partner closely with engineering, infrastructure, design, AI research, and product teams to manage cross-org delivery of our most important AI services - and help ensure our infrastructure and evaluation loops scale to meet real product demand. If you're comfortable working across quota tracking, labeling pipelines, cost modeling, and launch readiness, this role was made for you.

This is a full-time role that can be held from one of our US hubs or remotely in the United States.

What you'll do:

  • Own and drive programs supporting Figma's AI platform - including annotation velocity, evaluation pipelines, and cost/capacity readiness
  • Partner with Infra and Finance to plan model scaling across providers: track token usage, forecast traffic, manage regional limits, optimize caching strategies, and reduce latency
  • Lead our internal AI Annotation Program: manage vendors and design annotators, define task priorities, improve quality standards, and increase annotator throughput
  • Support internal AIOps initiatives - drive model go/no-go decisions, monitor model behavior, prevent regressions, and ensure readiness across quality gates
  • Drive cross-functional execution of key AI-powered product features - coordinate scope, risks, comms, and launch checklists
  • Partner with Data Science to maintain and improve internal visibility: annotation metrics, token quotas, reliability dashboards, and evaluation timelines
What we're looking for:

  • 4+ years of technical program management experience (or equivalent) in AI platform, AI research, or AI infrastructure
  • A solid understanding of how AI gets built and scaled: model evaluation loops, annotation pipelines, quota limits, and data versioning
  • Hands-on experience running AI cost/capacity reviews, forecast planning, and vendor oversight, plus a deep understanding of model cost mechanics, including token burn, cache hit rates, latency, and quota limits
  • Comfort operating in high-ambiguity, high-velocity environments with exec visibility
  • Strong writing and communication skills - you bring structure, clarity, and momentum to complex technical programs
  • A systems-thinking mindset for the AI delivery pipeline, and a sense of where to tighten loops or increase speed
  • Experience with AI vendor contracts, third-party API quotas, or multi-cloud capacity planning
  • Background in AIOps / model quality pipelines - even better if you've built or scaled them
  • Experience scaling contractor or external vendor teams delivering core data operations
  • A strong bias to action, self-motivation, and curiosity, with the desire to bring people together and deliver high-quality results amid the constantly evolving growth and excitement of a start-up culture
  • Familiarity with a modern scaled web stack (AWS, Sinatra, React)