Associate Product Manager Winternship, AI

Arcade
Tracy, CA · Hybrid

About The Position

Arcade is building the world’s first AI physical product creation platform, where imagination becomes reality. Our platform lets anyone design, purchase, and sell custom, manufacturable products using natural language and generative AI. We believe everyone should have the power to create physical goods as easily as they post online, and we’re building the infrastructure to make that real.

We’ve raised $42M from a world-class group of investors, including Reid Hoffman, Forerunner Ventures (Kirsten Green), Canaan Partners (Laura Chau), Adverb Ventures (April Underwood), Factorial Funds (Sol Bier), Offline Ventures (Brit Morin), Sound Ventures (Ashton Kutcher), Inspired Capital (Alexa von Tobel), and Torch Capital (Jonathan Keidan). Our angel investors include Elad Gil, Ev Williams, Marissa Mayer, Sara Beykpour, Kayvon Beykpour, Anna Veronika Dorogush, Eugenia Kuyda, David Luan, Sharon Zhou, Kelly Wearstler, Karlie Kloss, Colin Kaepernick, Christy Turlington Burns, and Jeff Wilke.

Arcade is headquartered in San Francisco’s Presidio and led by serial entrepreneur Mariam Naficy (Minted, Eve) and a founding team with deep experience in generative AI, design systems, and supply chain. We’re pioneering a new category at the intersection of AI, personal expression, and on-demand manufacturing, and we’re building fast.

Role Summary

Note on required timing and availability: the role starts remote immediately and runs through the end of December, then moves onsite to our San Francisco Presidio office from January 2 through the end of January, in person 5 days/week. We are looking for college students who are coming back to the Bay Area for their winter break, or who attend nearby colleges (Berkeley, Stanford, etc.).

Arcade’s Associate Product Manager (APM), AI Internship is a hands-on role for a highly organized college student to support cross-functional execution while helping shape our agent experience and the data operations behind it. You’ll project manage across engineering, research, design, and product; lead digital product work on new web-based tools and functionality; manage testing of AI tools in development; and creatively develop prompt-based testing scenarios to pressure test quality, manufacturability, and reliability. You’ll contribute to system prompt strategy, tool orchestration and routing, on-surface UX, and instrumentation across web and SMS entry points, while helping operationalize datasets, labeling/QA, evals, and HITL guardrails that lift output quality at scale.

Requirements

  • Unconventional, extraordinary talent with a proven track record in AI, engineering, design, startups, research, or unexpected fields such as sports or the arts
  • Technically fluent builders comfortable reading source code, working in notebooks, and debugging eval pipelines
  • Agent and LLM tinkerers with experience building agentic workflows in personal, research, or production settings
  • Deeply curious AI thinkers, especially across LLMs, diffusion models, vector search, tool use, and prompt evaluation
  • Clear thinkers and communicators who bring structure to ambiguity and can engage engineers, researchers, and designers alike
  • High-agency operators who move fast, take ownership, and bias toward impact
  • Tasteful, low-ego teammates who bring humility, energy, and judgment to every interaction
  • College student in Computer Science, Engineering, Statistics, Physics, Mathematics, HCI, or related field; or equivalent experience building and shipping things.
  • Experience building or shipping products (projects, internships, startups, or open-source), ideally touching AI agent features, prompts/tools, or generative UX flows.
  • Familiarity with agent stacks and evaluation concepts (LLMs, prompt tooling, retrieval/vector DBs, replay/eval harnesses) and interest in prompt scenario design.
  • Comfort with basic data operations/analysis (e.g., Python/SQL), labeling/QA workflows, and dataset curation/taxonomy; willingness to dive into data hygiene and evals.

Responsibilities

  • Project manage multi-disciplinary workstreams (product, engineering, research, design), drive weekly checklists, unblock owners, and communicate progress crisply.
  • Manage testing of AI tools in development: run structured tests, analyze traces, harden tool calls, and synthesize issues and fixes to improve latency, stability, and safety.
  • Creatively design prompt-based testing scenarios and offline/online evals that pressure test agent success criteria, taste controls, and manufacturability thresholds.
  • Help steward agent configuration (system prompt hygiene, tool selection/routing), versioning, and A/B experiments; document changes and outcomes.
  • Instrument analytics and dashboards to track conversion, delight, time-to-first-success, and on-surface quality; summarize insights and recommend next steps.
  • Support AI data operations: curate datasets, apply taxonomy and data hygiene rules, coordinate labeling/QA workflows, and build evaluation datasets that lift output quality.
  • Contribute to HITL guardrails and workflows for targeted categories; document policies, triggers, and escalation paths that balance speed with production fidelity.

Benefits

  • Competitive compensation
  • Lunch provided daily
  • Company events