Senior Product Manager, Hardware (NPU IP)

Quadric, Inc. · Burlingame, CA
$200,000 – $250,000 · On-site

About The Position

Quadric has created an innovative general-purpose neural processing unit (GPNPU) architecture. Quadric's co-optimized software and hardware are targeted at neural network (NN) inference workloads in a wide variety of edge and endpoint devices, ranging from battery-operated smart-sensor systems to high-performance automotive and autonomous-vehicle systems. Unlike other NPUs and neural network accelerators in the industry today, which can accelerate only a portion of a machine learning graph, the Quadric GPNPU executes both NN graph code and conventional C++ DSP and control code.

Quadric is seeking a Senior Product Manager, Hardware to own the Chimera GPNPU IP roadmap across five tracks: Compute, Efficiency, Scalability, Safety, and Integration. The IP we freeze in 2026 ships in customer silicon in 2027 and 2028. This role sets the feature list for QC and QD, drives architecture-freeze decisions, and partners directly with anchor SoC customers on what they need next. You'll work with HW engineering on execution and with the CPO on strategy. The role is based in Burlingame (on-site), with quarterly travel to Japan, the U.S. East Coast, the U.S. Midwest, and customer SoC teams worldwide.

Requirements

  • 5–8 years of PM experience on hardware or silicon products.
  • Senior individual-contributor (IC) background.
  • Able to hold a substantive working conversation, without prep, on: NPU/compute architecture (dataflow, memory hierarchy, MAC-array sizing, bandwidth-vs-compute balance); datatypes and quantization (INT4, INT8, FP8, BF16, OCP MX — numerics tradeoffs, not just names); SoC integration (AMBA AXI4/ACE-Lite, CoreSight, power-domain partitioning); functional safety (ISO 26262, FMEDA, lockstep, ASIL-B/D); and the competitive landscape (Synopsys NPX, Arm Ethos, Ceva NeuPro, VeriSilicon Vivante).
  • You have run architecture reviews with sophisticated technical buyers — Tier 1 automotive, OEM SoC teams — not just sat in them.
  • Comfort with long feedback loops: customers integrate our IP for 18–24 months before silicon ships, so decisions made today appear in production volume in 2028.
  • You use agentic AI tools daily (Claude Code, Cursor, or equivalent) to produce work.
  • When engineering wants to build the elegant thing and the customer needs the workable thing, you take the workable thing every time.
  • Direct experience with at least two of: NPU/AI accelerator architecture, SoC integration, DSP/vector compute, automotive silicon, semiconductor IP licensing.
  • EE/CE/CS engineering degree, or equivalent depth.
  • Experience with in-person technical customer reviews.
  • Bay Area resident, or willing to relocate to Burlingame.

Nice To Haves

  • RTL or silicon-implementation background — shipped a block, sat through tape-out, or owned a microarchitecture spec end-to-end
  • Prior IP vendor experience (Synopsys, Arm, Ceva, Cadence, Imagination, VeriSilicon, Rambus, or similar)
  • Direct exposure to the AI/ML compute stack at the architecture level — dataflow, quantization, sparsity, mixed-precision tradeoffs
  • Japanese-market customer experience; our largest account and a meaningful share of the pipeline are in Japan

Responsibilities

  • PPA leadership. Own Quadric's TOPS/W and area-per-TOPS targets for QD. Identify where we're at risk of falling behind at iso-process node and force the tradeoff conversations that close the gap before silicon freezes.
  • Hardware roadmap. Translate customer requirements, competitive pressure, and architectural constraints into a track-by-track feature list with explicit gating — what ships, what slips, what gets cut.
  • Architecture freeze. Build the case for contested architecture decisions (e.g., dedicated requant unit vs. wider MAC array), run the tradeoff with the architects, and bring a recommendation to the PSC. The CPO drives the meeting; you own the input.
  • Customer engagement. Named hardware product owner for anchor accounts. Present new features, ingest formal architecture feedback, and convert it into roadmap input.
  • IP usability. Find what customers are silently working around — API wrappers, glue logic, custom RTL hooks, workaround scripts. When five customers have written the same hack, that hack is a QD feature.
  • Hardware competitive intelligence. Track Synopsys NPX6, Arm Ethos-U85, Ceva NeuPro-M, VeriSilicon Vivante, and NVIDIA Jetson at the architectural level. Deliver two formal teardowns per year with normalized PPA comparisons. Translate competitor moves into specific feature requirements before gaps appear in customer evals.
  • SoC integration positioning. Decide which integration knobs (AXI4/ACE-Lite, CoreSight, power-domain partitioning, performance counters) are exposed, which are productized, and how they're described in the integration guide.
  • Functional safety positioning. Sequence FMEDA work, lockstep configurations, and safety-island architecture. Decide what we commit to and to whom on the path from ASIL-B to ASIL-D.

Benefits

  • Competitive salary and meaningful equity
  • Medical, dental, and vision plan options starting on day one
  • 401(k) retirement plan
  • Flexible paid time off (unlimited, non-accrual)
  • Company-provided lunches and a stocked kitchen
  • Monthly parking or Caltrain pass