Sr. Beta Program Manager

Fortune Brands · Deerfield, IL
$115,000 - $145,000 · Hybrid

About The Position

The Sr. Beta Program Manager is a hands-on leadership role at the center of Fortune Brands Innovations’ connected product validation strategy. This is not a project coordination role: it is a quality-focused, field-oriented program lead who owns the end-to-end consumer validation lifecycle across alpha, beta, and delta phases for smart home products spanning hardware, firmware, mobile applications, and cloud software.

This role demands a candidate who combines the analytical rigor of a QA engineer, the cross-functional fluency of a product manager, and the field instincts of a hands-on technologist. You will be a trusted partner to QA, Engineering, Software Product Management, Hardware (Category) Product Management, UX, and Customer Support, and a vocal, data-backed advocate for real-world product quality. This role requires someone who rolls up their sleeves: reproducing reported issues, triaging bug quality before JIRA import, differentiating true defects from feature requests, and holding the line on release readiness, backed by data, not diplomacy.

POSITION LOCATION: This position can work hybrid in either Deerfield, IL or San Francisco, CA.

Requirements

  • BA/BS degree in a technical, product, or business field; or equivalent practical experience.
  • 7+ years of hands-on experience in beta/alpha/delta program management, user validation, or closely related roles in consumer hardware, IoT, or multi-platform software products.
  • Demonstrated experience personally triaging, reproducing, and classifying bugs; proven ability to partner with QA in bug scrubs, validate reproducibility, and enforce submission quality standards.
  • Strong technical fluency across the connected product stack: mobile applications (iOS/Android), cloud/backend services, embedded firmware, and consumer hardware.
  • Hands-on experience with Centercode (or equivalent beta management platforms) and JIRA: configuring feedback templates, managing Centercode-to-JIRA import workflows, and building reporting well beyond default platform outputs.
  • Experience building two-audience reporting: detailed cross-functional reports for Engineering/QA/Product and executive summaries with business impact framing.
  • Experience with business intelligence tools (Power BI, Tableau, or equivalent) for dashboard creation and trend analysis.
  • Proficiency with Confluence, Excel (Pivot Tables, advanced formulas), and SQL.
  • Ability to manage multiple simultaneous programs without sacrificing quality.
  • Excellent written and verbal communication skills; ability to write precise bug reports, executive summaries, and stakeholder documentation.
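The Centercode-to-JIRA import workflow named in the requirements above essentially reduces to transforming a validated field report into a well-formed tracker ticket. A minimal sketch of that transformation follows; the project key, record shape, and severity-to-priority mapping are all hypothetical illustrations, not Fortune Brands' actual configuration:

```python
# Sketch: turn a pre-triaged, Centercode-style beta report into a JIRA
# "create issue" payload. All field names, the project key, and the
# severity mapping below are hypothetical examples.

SEVERITY_TO_PRIORITY = {
    "critical": "Highest", "major": "High", "minor": "Medium", "trivial": "Low",
}

def build_jira_payload(report: dict) -> dict:
    """Build a JIRA create-issue payload from a triaged beta report.

    Only reports with verified reproduction steps should reach this point;
    the description embeds the device/firmware/app context captured at triage.
    """
    description = (
        f"Steps to reproduce:\n{report['steps']}\n\n"
        f"Device: {report['device']} | Firmware: {report['firmware']} | "
        f"App: {report['app_version']}\n"
        f"Reproduced by program team: {'yes' if report['reproduced'] else 'no'}"
    )
    return {
        "fields": {
            "project": {"key": "SMART"},           # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": report["title"][:255],       # stay under summary limit
            "description": description,
            "priority": {"name": SEVERITY_TO_PRIORITY[report["severity"]]},
            "labels": ["beta-program", report["phase"]],  # alpha / beta / delta
        }
    }

report = {
    "title": "Lock fails to pair after OTA update",
    "steps": "1. Update firmware\n2. Reboot\n3. Attempt pairing",
    "device": "SmartLock v2", "firmware": "2.1.0", "app_version": "5.3.1 (iOS)",
    "reproduced": True, "severity": "major", "phase": "beta",
}
payload = build_jira_payload(report)
print(payload["fields"]["priority"]["name"])  # High
```

In practice the payload would be POSTed to the tracker's create-issue endpoint; the point of the gate this role owns is that nothing reaches that step without verified steps, context, and severity.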

Nice To Haves

  • Experience with AI productivity tools — particularly Claude AI — for accelerating documentation, analysis, and reporting.
  • Background in quality assurance (QA) engineering, software testing, or hardware validation; comfort reading device logs, crash reports, and firmware build notes.
  • Experience with delta testing and post-launch product validation cycles.
  • Familiarity with agile/scrum methodologies and SDLC processes.
  • Experience at a leading consumer electronics, smart home, or IoT-focused organization with hardware, firmware, mobile, and cloud software products.
  • Experience with Voice of Customer (VoC) frameworks and translating customer insights into action.
  • Familiarity with mobile app testing (iOS/Android), OTA firmware update flows, and cloud service dependencies in connected product ecosystems.
  • Understanding of prototype security protocols, hardware NDA management, and trade compliance requirements.
  • Experience managing stakeholders from individual contributors to C-suite.
  • Knowledge of user research methodologies.
  • PMP or equivalent program management certification.

Responsibilities

Consumer Validation Program Ownership
  • Design and execute end-to-end consumer validation programs across alpha, beta, and delta stages for connected hardware, firmware, mobile apps, and cloud services.
  • Develop structured test plans, targeted test scenarios, and use-case libraries with Product Management, QA, and Engineering — ensuring coverage of real-world edge cases.
  • Manage simultaneous test programs with discipline: tester recruitment, device logistics, NDA and trade compliance, communications cadence, and issue tracking.
  • Recruit and manage diverse, global tester pools segmented by persona, device type, geography, and platform to ensure representation across target markets.
  • Monitor tester engagement; identify and address participation risks early to prevent drop-off and sustain high-quality feedback throughout test cycles.
  • Coordinate required software builds, firmware versions, and hardware units to ensure program readiness across all phases.
  • Partner with shipping and logistics teams to ensure on-time device delivery for synchronized program launches.
Bug Triage, Signal Quality & QA Partnership
  • Reproduce reported bugs from Centercode prior to JIRA import — verifying steps to reproduce, capturing logs/screenshots/video, and confirming device, firmware, and app version context.
  • Work daily with QA to validate issue severity and classification ahead of bug scrubs; serve as the first quality gate for every ticket entering the engineering queue.
  • Distinguish between verified software bugs, hardware/firmware defects, environment issues, duplicate reports, and feature requests — and reduce vague or non-actionable submissions through improved survey design, targeted feedback prompts, and tester education.
  • Actively participate in cross-functional bug scrub sessions with QA, Engineering, and Product; present pre-triaged, evidence-backed issues with clear severity assessments.
  • Own the Centercode-to-JIRA workflow: enforce submission quality standards, establish import criteria, and refine feedback templates to raise the baseline quality of inbound reports.
  • Track issues from initial report through resolution; follow up with Engineering to confirm closure and communicate outcomes back to testers.
Data, Insights & AI-Powered Analysis
  • Use Claude AI and other AI-powered tools to accelerate documentation, analysis, survey development, and reporting — without substituting AI output for critical judgment.
  • Synthesize survey data, telemetry, usage patterns, and qualitative tester feedback into actionable insights — not just summaries of Centercode’s default reports.
  • Identify recurring failure patterns, user experience friction points, and systemic product risks before they become post-launch customer complaints.
  • Apply critical judgment to AI-generated findings; bring domain expertise to distinguish meaningful signals from noise.
  • Build customized dashboards and trend analyses beyond out-of-the-box platform views, tailored to team and leadership needs.
  • Correlate beta feedback with Customer Support ticket trends, historical NPI data, and post-launch metrics to surface recurring or emerging issues early.
Cross-Functional Partnership & Stakeholder Collaboration
  • Serve as an embedded partner to QA, Engineering, Software PM, Hardware (Category) PM, UX, and Customer Support — not a peripheral coordinator.
  • Translate real-world user feedback into concrete, prioritized product improvements that balance technical feasibility with customer impact.
  • Advocate directly for product quality concerns backed by data — including flagging release-readiness risks to leadership when field evidence warrants it.
  • Engage one-on-one with testers to gather qualitative feedback, troubleshoot reported issues, and maintain a high-quality tester experience.
  • Coordinate with global stakeholders across multiple time zones.
  • Maintain strict adherence to Prototype Security protocols and Trade Compliance requirements across all phases.
Reporting: Cross-Functional & Executive
  • Maintain two reporting tracks: detailed cross-functional reports (data-dense, issue-specific, and actionable for QA, Engineering, and Product) and executive summaries (business impact, release risk, and go/no-go recommendations).
  • Develop weekly, milestone, and end-of-program reports with clear metrics: issue volume trends, severity distributions, tester engagement, signal-to-noise ratios, and reproduction rates.
  • Create presentations for senior leadership grounded in field data — not sanitized for comfort.
  • Use Power BI, Tableau, or equivalent tools to build repeatable, self-service reporting views for cross-functional partners.
Program Continuous Improvement & Tooling
  • Continuously improve testing frameworks, Centercode configurations, feedback templates, and tester onboarding processes.
  • Build Confluence documentation that captures institutional knowledge: program playbooks, triage criteria, escalation paths, and lessons learned.
  • Refine tester feedback quality through survey design best practices, targeted follow-up, and tester education.
  • Monitor competitive landscape, industry trends, and evolving customer expectations to benchmark and strengthen program effectiveness.
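The reporting metrics named above (severity distributions, signal-to-noise ratios, reproduction rates) can be derived directly from a triage log. A minimal sketch over a hypothetical feedback export follows; the record shape (category, severity, reproduced) is an assumed format, not Centercode's actual export schema:

```python
from collections import Counter

# Sketch: compute core program-health metrics from a triage log.
# "Signal" = verified bugs; "noise" = duplicates, feature requests,
# environment issues, and other non-actionable submissions.
def program_metrics(reports: list[dict]) -> dict:
    bugs = [r for r in reports if r["category"] == "bug"]
    noise = len(reports) - len(bugs)
    reproduced = sum(1 for r in bugs if r["reproduced"])
    return {
        "severity_distribution": dict(Counter(r["severity"] for r in bugs)),
        "signal_to_noise": round(len(bugs) / max(noise, 1), 2),
        "reproduction_rate": round(reproduced / len(bugs), 2) if bugs else 0.0,
    }

# Hypothetical four-report triage log: two verified bugs, one feature
# request, one duplicate.
log = [
    {"category": "bug", "severity": "major", "reproduced": True},
    {"category": "bug", "severity": "minor", "reproduced": False},
    {"category": "feature_request", "severity": None, "reproduced": False},
    {"category": "duplicate", "severity": None, "reproduced": False},
]
print(program_metrics(log))
```

The same aggregation, pointed at a weekly export, yields the trend lines the role's weekly and milestone reports call for; a BI tool such as Power BI or Tableau would typically sit on top of this kind of query for the self-service views.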

Benefits

  • Employees participate in either an annual bonus plan based on company and individual performance or a role-based sales incentive plan.
  • Benefits include robust health plans, a market-leading 401(k) program with a company contribution, product discounts, flexible time off, adoption benefits, and more.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001-10,000
