Hardware Architecture Expert - 3P

OpenAI
San Francisco, CA
Hybrid

About The Position

We are seeking a 3P Hardware Architecture Expert with deep expertise in GPU and accelerator architectures to engage directly with silicon vendors and guide hardware decisions for AI infrastructure. In this role, you will evaluate architectural tradeoffs across compute, memory, and interconnect systems, translating vendor specifications into real-world workload impact. You will play a critical role in early silicon evaluation, benchmarking, and performance validation, helping ensure that next-generation hardware meets the needs of our workloads. This role is highly hands-on and requires both deep technical understanding and the ability to engage at a high level with partners such as NVIDIA and AMD on architectural direction and design tradeoffs.

Requirements

  • Deep expertise in GPU or accelerator architecture, including performance and power tradeoffs.
  • Understanding of AI workload behavior and how it interacts with hardware design choices.
  • Comfort engaging directly with silicon vendors at a technical architecture level.
  • Hands-on experience with benchmarking, profiling, and performance analysis.
  • Ability to translate low-level hardware details into system-level and workload-level impact.
  • Equal comfort in theory (architecture) and practice (measurement/validation).
  • A track record of thriving in environments that bridge internal teams and external partners.

Nice To Haves

  • Experience working with or at silicon providers such as NVIDIA or AMD.
  • Familiarity with AI accelerator stacks, including GPUs, custom ASICs, or emerging architectures.
  • Experience with early silicon bring-up or hardware validation workflows.
  • Strong understanding of memory systems (HBM, DDR, cache hierarchies) and data movement bottlenecks.
  • Experience with performance tooling, microbenchmarks, and workload characterization.

Responsibilities

  • Engage deeply with silicon vendors (e.g., NVIDIA and AMD) on GPU and accelerator architecture tradeoffs.
  • Analyze and interpret performance, power, and efficiency characteristics of next-generation hardware.
  • Translate vendor specifications into expected real-world performance for AI workloads.
  • Evaluate architectural aspects including compute throughput and utilization; memory systems (HBM, cache hierarchies, bandwidth constraints); data types and precision tradeoffs (FP16, BF16, FP8, etc.); and interconnect and scaling behavior.
  • Run benchmarks and profiling to validate hardware performance against workload requirements.
  • Lead early bring-up and evaluation of engineering sample (ES) silicon.
  • Partner with performance modeling and system architecture teams to align measured vs. modeled behavior.
  • Provide actionable feedback to vendors to influence future silicon design and roadmap decisions.

Benefits

  • Relocation assistance


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: No Education Listed
  • Number of Employees: 1-10 employees
