Sr. Principal Processor Architect

Neurophos · Austin, TX
Onsite

About The Position

At Neurophos, listed as one of EE Times' 2025 100 Most Promising Start-ups, we are revolutionizing AI computation with the world's first metamaterial-based optical computing platform. Our design addresses the traditional shortcomings of silicon photonics for inference and provides an unprecedented AI engine with substantially higher throughput and efficiency than any existing solution. We've created an optical metasurface with 10,000x the density of traditional silicon photonics modulators. This enables 100x gains in power efficiency for neural network computing without sacrificing throughput; we've made improvements there, too. By integrating metamaterials with conventional optoelectronics, our compute-in-memory optical system surpasses existing solutions by a wide margin and enables truly high-performance, cost-effective AI compute. Join us to shape the future of optical computing.

Location: San Francisco Bay Area or Austin, TX. Full-time onsite position.

Position Overview

We are seeking a highly experienced Sr. Principal Processor Architect to lead the design of the processing core at the heart of our optical processing units (OPUs). This role is critical to defining the microarchitecture that bridges our revolutionary optical computing engines with efficient, scalable digital control and processing. The ideal candidate will bring deep expertise in advanced processor design, massive parallelism, and specialized accelerator architectures to create a novel compute platform optimized for AI inference workloads.

Requirements

  • PhD in Computer Science, Electrical Engineering, or related field with focus on computer architecture (or MS with equivalent experience)
  • 15+ years of experience in processor architecture and design
  • Deep expertise in pipelined processor design, including in-order and out-of-order (OoO) execution
  • Strong understanding of superscalar architectures, multithreading, and vector/SIMD machines
  • Extensive knowledge of branch prediction, speculation, exception handling, and architectural state management
  • Experience with massive parallelism architectures (GPU shader cores, vector processors, or similar)
  • Track record of shipping processor designs or significant architectural contributions
  • Strong publication record in computer architecture venues (ISCA, MICRO, ASPLOS, HPCA)
  • Excellent communication skills and ability to lead cross-functional technical discussions

Nice To Haves

  • GPU shader core design experience or deep familiarity with GPU microarchitecture
  • Experience with domain-specific accelerators (TPU, NPU, DSP, or similar)
  • Knowledge of ML workload characteristics and accelerator design patterns
  • Familiarity with near-memory computing, in-memory computing, or optical computing paradigms
  • Experience with custom instruction set design and compiler co-design
  • Background in power-efficient microarchitecture techniques
  • Understanding of datacenter processor requirements and interconnect technologies
  • Experience with vector processor architectures (Cray, NEC SX, ARM SVE, RISC-V Vector)

Responsibilities

  • Lead the architectural design of custom processor cores for Neurophos OPUs, balancing performance, power, and area constraints
  • Define microarchitectural features, including pipeline organization, execution units, vector/SIMD capabilities, and memory hierarchies
  • Design for massive-scale parallelism, drawing on GPU shader core and vector processor principles
  • Architect instruction sets, control flow mechanisms, branch prediction strategies, and exception handling
  • Evaluate and implement in-order vs. out-of-order execution, superscalar techniques, and multithreading approaches
  • Collaborate with optical engine designers to optimize the processor-accelerator interface
  • Work with modeling teams to validate architectural decisions through performance simulation
  • Drive co-design with compiler and runtime software teams to ensure efficient code generation
  • Publish research and represent Neurophos in the computer architecture community
  • Mentor junior architects and establish architectural best practices

Benefits

  • A pivotal role in an innovative startup redefining the future of AI hardware
  • A collaborative and intellectually stimulating work environment
  • Competitive compensation, including salary and equity options
  • Opportunities for career growth and future team leadership
  • Access to cutting-edge technology and state-of-the-art facilities
  • Opportunity to publish research and contribute to the field of efficient AI inference