The AGI (Artificial General Intelligence) Computing Lab is dedicated to solving the complex system-level challenges posed by the growing demands of future AI/ML workloads. Our team designs and develops scalable platforms that handle the computational and memory requirements of these workloads while minimizing energy consumption and maximizing performance. To achieve this, we collaborate closely with hardware and software engineers to identify and address the unique challenges posed by AI/ML workloads and to explore new computing abstractions that strike a better balance between the hardware and software components of our systems. We also conduct ongoing research and development in emerging technologies across memory, computing, interconnect, and AI/ML, ensuring that our platforms are equipped to handle the most demanding workloads of the future. By working together as a dedicated and passionate team, we aim to revolutionize the way AI/ML applications are deployed and executed, ultimately contributing to the advancement of AGI in an affordable and sustainable manner. Join us in shaping the future of computing!

This role is offered under the AGICL lab as part of DSRA. We are a research-driven systems lab working at the intersection of large language models, accelerator hardware, and high-performance software stacks. Our mission is to design, prototype, and optimize next-generation AI systems through tight hardware–software co-design. Our team works hands-on with cutting-edge accelerator hardware, experimental memory systems, and emerging domain-specific languages (DSLs). We build and optimize a Triton-based software stack that pushes the limits of performance, efficiency, and scalability for modern LLM workloads.

We are looking for a Senior AI Systems Engineer with deep experience in high-performance Triton kernel development on modern accelerators. In this role, you will design, analyze, and optimize performance-critical kernels used in large-scale LLM inference and training pipelines. You will work closely with hardware architects, compiler engineers, and ML researchers to identify performance bottlenecks, interpret profiling data, and co-design solutions that span software and hardware boundaries. This role is ideal for engineers who enjoy working close to the hardware while still reasoning deeply about model-level abstractions.

Location: Daily onsite presence at our San Jose, CA office / U.S. headquarters in alignment with our Flexible Work policy.
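For candidates less familiar with Triton, below is a minimal illustrative sketch of the kind of kernel work this role centers on, written against the public open-source Triton API. The kernel name, tile size, and wrapper function are hypothetical examples for illustration only, not code from our stack; production kernels for LLM workloads (attention, GEMM epilogues, quantized matmuls) follow the same tiled, masked pattern at much larger scale.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one BLOCK_SIZE-wide tile of the inputs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard lanes that fall past the end of the data
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Hypothetical host-side wrapper: launch a 1D grid, one program per tile.
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```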
Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
Number of Employees: 5,001-10,000