Principal Silicon Performance Architect

Microsoft
Redmond, WA

About The Position

We are forming a small, agile engineering team to accelerate a new initiative focused on artificial intelligence (AI) performance, from micro-architecture exploration through end-to-end workload validation. You'll work at the intersection of silicon, systems, and software, partnering cross-functionally with chip architects, system architects, and inference software engineers to drive data-backed design decisions and deliver step-function improvements in throughput, latency, and efficiency.

As a Principal Silicon Performance Architect on this AI acceleration effort, you will own performance modeling and analysis for current and future AI workloads. You will translate hardware and software architectural ideas into simulator implementations, run rigorous experiments across design variants, and turn results into clear guidance for architecture and product decisions.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Requirements

  • Bachelor's Degree in Computer Science or related technical field AND 8+ years of technical engineering experience with coding in languages including, but not limited to, C/C++ or Python, OR equivalent experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Nice To Haves

  • Advanced Degree (Master's, Ph.D.) in Electrical Engineering, Computer Engineering, or related field.
  • Experience in chip architecture or micro-architecture analysis at the logical level, including memory, functional units, memory controllers, and Input/Output (I/O) controllers.
  • Experience in performance engineering, including profiling, bottleneck analysis, experimental design, and micro-architecture trade-off analysis.
  • Experience using performance modeling or simulation to evaluate hardware/software trade-offs across chip, system, and software teams.
  • Experience with AI inference acceleration features and accelerator or Graphics Processing Unit (GPU) performance analysis.
  • Experience with the AI inference software stack, including compilers, runtimes, and model serving systems.
  • Experience modifying architectural simulators or performance modeling codebases (e.g., C, C++, or Python).

Responsibilities

  • Extend and adapt simulation infrastructure to model new micro-architecture innovations for AI inference.
  • Analyze performance for current and forward-looking AI inference workloads across latency, throughput, and efficiency dimensions.
  • Drive design-space exploration using AI-assisted workflows, automation, and large-scale experiment generation.
  • Communicate performance insights clearly and influence architecture decisions through data-driven recommendations.
  • Collaborate closely with chip, system, and software architects to propose, evaluate, and iterate on architectural variations.

Benefits

  • Certain roles may be eligible for benefits and other compensation.