Sr. Principal Processor Architect

Neurophos · Austin, TX · Onsite

About The Position

We are developing an ultra-high-performance, energy-efficient photonic AI inference system, transforming AI computation with the first-ever metamaterial-based optical processing unit (OPU). As AI adoption accelerates, data centers face significant power and scalability challenges. Traditional solutions are struggling to keep up, leading to rapidly rising energy consumption and costs. We're solving both problems with an OPU that integrates over one million micron-scale optical processing components on a single chip. This architecture will deliver up to 100 times the energy efficiency of existing solutions while significantly improving large-scale AI inference performance.

We've assembled a world-class team of industry veterans and recently raised a $110M Series A led by Gates Frontier. Participants include M12 (Microsoft's Venture Fund), Carbon Direct Capital, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital, and others. We have also been recognized on the EE Times Silicon 100 list for several consecutive years. Join us and shape the future of optical computing!

Location: San Francisco Bay Area or Austin, TX. Full-time onsite position.

Position Overview

We are seeking a highly experienced Sr. Principal Processor Architect to lead the design of the processing core at the heart of our optical processing units (OPUs). This role is critical to defining the microarchitecture that bridges our revolutionary optical computing engines with efficient, scalable digital control and processing. The ideal candidate will bring deep expertise in advanced processor design, massive parallelism, and specialized accelerator architectures to create a novel compute platform optimized for AI inference workloads.

Requirements

  • PhD in Computer Science, Electrical Engineering, or related field with focus on computer architecture (or MS with equivalent experience)
  • 15+ years of experience in processor architecture and design
  • Deep expertise in pipelined processor design, including in-order and out-of-order (OoO) execution
  • Strong understanding of superscalar architectures, multithreading, and vector/SIMD machines
  • Extensive knowledge of branch prediction, speculation, exception handling, and architectural state management
  • Experience with massive parallelism architectures (GPU shader cores, vector processors, or similar)
  • Track record of shipping processor designs or significant architectural contributions
  • Strong publication record in computer architecture venues (ISCA, MICRO, ASPLOS, HPCA)
  • Excellent communication skills and ability to lead cross-functional technical discussions

Nice To Haves

  • GPU shader core design experience or deep familiarity with GPU microarchitecture
  • Experience with domain-specific accelerators (TPU, NPU, DSP, or similar)
  • Knowledge of ML workload characteristics and accelerator design patterns
  • Familiarity with near-memory computing, in-memory computing, or optical computing paradigms
  • Experience with custom instruction set design and compiler co-design
  • Background in power-efficient microarchitecture techniques
  • Understanding of datacenter processor requirements and interconnect technologies
  • Experience with vector processor architectures (Cray, NEC SX, ARM SVE, RISC-V Vector)

Responsibilities

  • Lead the architectural design of custom processor cores for Neurophos OPUs, balancing performance, power, and area constraints
  • Define microarchitectural features, including pipeline organization, execution units, vector/SIMD capabilities, and memory hierarchies
  • Design for massive-scale parallelism, drawing on GPU shader core and vector processor principles
  • Architect instruction sets, control flow mechanisms, branch prediction strategies, and exception handling
  • Evaluate and implement in-order vs. out-of-order execution, superscalar techniques, and multithreading approaches
  • Collaborate with optical engine designers to optimize the processor-accelerator interface
  • Work with modeling teams to validate architectural decisions through performance simulation
  • Drive co-design with compiler and runtime software teams to ensure efficient code generation
  • Publish research and represent Neurophos in the computer architecture community
  • Mentor junior architects and establish architectural best practices

Benefits

  • A pivotal role in an innovative startup redefining the future of AI hardware.
  • A collaborative and intellectually stimulating work environment.
  • Competitive compensation, including salary and equity options.
  • Opportunities for career growth and future team leadership.
  • Access to cutting-edge technology and state-of-the-art facilities.
  • Opportunity to publish research and contribute to the field of efficient AI inference.

This is a rare opportunity to work on a game-changing technology at the intersection of photonics and AI. As part of our elite team, you'll contribute to a platform that redefines computational performance and accelerates the future of artificial intelligence. Be a key player in bringing this transformative innovation to the world.