About The Position

Primary's Select team is looking for a Research Fellow to develop a rigorous, data-backed view on AI compute demand and the hardware architecture trends shaping it, and to keep that view current as the market evolves. The work feeds live, high-stakes investment decisions, and over time becomes an ongoing intelligence function for the firm. Primary has been investing behind a clear compute thesis since early 2023: every major technology shift requires a rebuild of its underlying infrastructure, and the AI compute buildout is creating rare openings for first-principles entrants across silicon, memory, networking, and beyond. The engagement is 4 to 6 weeks of full-time consulting work, with the option to continue on retainer.

Requirements

  • 5+ years in a buy-side, sell-side, or independent research role covering AI infrastructure, semiconductors, or hyperscaler markets; alternatively, a research background in economics or another quantitative field with a track record of building rigorous models of complex real-world systems.
  • AI-Native: Claude or equivalent tools are already central to how you do research. You use them to run parallel investigations, build one-off tools, and work at a pace and depth a traditional process can't match.
  • Fluent on the substance: comfortable with the economics of inference at scale, the difference between prefill and decode, why agentic workloads place different demands on hardware than traditional ones, and how those distinctions shape which architectures win.

Responsibilities

  • Build a view on the demand environment. Develop a granular model of how hyperscalers and frontier labs are actually deciding to deploy the next wave of hardware capex. Track allocated megawatt capacity across the major buyers, the commitments already in place to specific hardware providers, the gap between contractually locked spending and speculative buildout, and the supply chain and macroeconomic factors that constrain that buildout: permitting timelines, turbine availability, HBM and TSMC capacity, and project-level financing, along with the conditions under which the spending environment could meaningfully shift.
  • Build a view on the hardware architecture competitive landscape. The inference market is at an inflection point, with workload demands evolving and a wider range of hardware approaches competing for share. Develop a clear view on what's actually driving demand, how that's likely to shift over the next 18 to 24 months, and how the workload landscape is likely to sort out across competing architectures.
  • The work happens through a combination of model building from public disclosures and licensed data sets, targeted primary conversations, synthesis of the existing research landscape, and heavy use of AI tools to go deeper than a traditional research process could. This includes standing up a set of agents to ingest relevant data continuously with your judgment layered on top, and meeting with the Select team on a regular cadence to flag what's changed and what it means.