Sr. Systems Design Engineer

Advanced Micro Devices, Inc.
San Jose, CA
Hybrid

About The Position

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

Senior Systems Design Engineer

We are seeking a Senior Systems Design Engineer to develop and optimize ML operator kernels and dataflow pipelines for AMD's NPU platform. You will own the full lifecycle of ML operators, from kernel implementation and performance analysis to ONNX Runtime and NPU hardware integration. You will work with the very latest hardware and software technology as part of a core team of industry specialists, with full-stack visibility from operator kernel development to silicon validation on AMD's NPU shipping in millions of PCs. There are opportunities to publish and patent your work.

The Person

The ideal candidate combines deep systems-level expertise with hands-on ML inference experience. You thrive at the hardware-software boundary, are comfortable profiling and optimizing low-level code, and can drive complex cross-functional debug efforts to resolution.

Job Details

Location: San Jose, CA, US
Onsite/Hybrid: This role requires the candidate to work full time (40 hours a week) in either a hybrid or onsite work structure.

Requirements

  • Programming Languages: Strong proficiency in C/C++ and Python; experience with multithreading and concurrency
  • ML Knowledge: Familiarity with ML operators (GEMM, Conv, Softmax, Attention) and inference frameworks (PyTorch, ONNX Runtime)
  • System Knowledge: Understanding of computer architecture, memory hierarchies, cache behavior, and low-level hardware APIs
  • Tools & Platforms: Proficiency with Git, debuggers, and profilers; experience with Linux development environments
  • Master's or PhD degree in Electrical or Computer Engineering
  • 3+ years of relevant industry experience

Nice To Haves

  • Exposure to MLIR/LLVM compiler infrastructure
  • Experience with NPU/GPU/accelerator kernel development or SDK integration
  • Familiarity with quantization techniques (INT8, FP8) and accuracy debugging
  • Experience with spatial architectures, systolic arrays, or dataflow accelerators
  • Track record of publications or patents in ML systems, compilers, or computer architecture

Responsibilities

  • Design and optimize NPU ML operator kernels and dataflow libraries across multiple datatypes (INT8, FP8, FP16, BF16)
  • Profile operator and end-to-end model latency; identify bottlenecks and drive performance improvements
  • Integrate and validate ML models within the ONNX Runtime framework on NPU
  • Debug and resolve issues across the NPU compiler stack — from kernel correctness to system-level model accuracy
  • Develop tiling strategies and optimize DMA data movement for on-chip memory utilization
  • Perform roofline analysis and build performance models to guide kernel optimization
  • Collaborate with silicon teams on hardware-software co-design for next-generation NPU
  • Drive technical innovation in NPU kernel and dataflow development, including tooling, benchmarks, and methodology improvements
  • Debug and root-cause issues spanning silicon bring-up, validation, and production phases of SOC programs
  • Coordinate cross-functionally with compiler, runtime, and hardware teams to ensure features are validated and performance targets are met on schedule
  • Contribute to hardware/software co-design by engaging in modeling frameworks and architectural trade-off analysis

Benefits

  • AMD benefits at a glance.