AWS's Trainium and Inferentia chips power the world's largest machine learning clusters. Our team builds C++ models of these custom SoCs that RTL designers, verification engineers, and software teams depend on throughout the silicon development lifecycle. We're looking for a modeling engineer to build and own models that directly impact how our chips are designed, verified, and brought to production.

Why this role is interesting:

- Your models are used to verify silicon before it's built — bugs you catch save months of schedule and millions of dollars
- You'll work at the intersection of software engineering and chip design, with deep visibility into how custom ML accelerators are architected
- As the team scales, there's a clear path into architectural modeling — using your models to influence chip design decisions, not just validate them
- Small team, high ownership, direct impact on AWS's most strategic silicon programs

No ML background needed. You'll learn the ML accelerator domain on the job.

This role can be based in Cupertino, CA or Austin, TX.
Job Type: Full-time
Career Level: Mid Level
Education Level: Not specified