Senior Engineer, AI Systems

Samsung Semiconductor
San Jose, CA (Onsite)

About The Position

The AGI (Artificial General Intelligence) Computing Lab is dedicated to solving the complex system-level challenges posed by the growing demands of future AI/ML workloads. Our team is committed to designing and developing scalable platforms that can effectively handle the computational and memory requirements of these workloads while minimizing energy consumption and maximizing performance. To achieve this goal, we collaborate closely with both hardware and software engineers to identify and address the unique challenges posed by AI/ML workloads and to explore new computing abstractions that can provide a better balance between the hardware and software components of our systems. Additionally, we continuously conduct research and development in emerging technologies across memory, computing, interconnect, and AI/ML, ensuring that our platforms are always equipped to handle the most demanding workloads of the future. By working together as a dedicated and passionate team, we aim to revolutionize the way AI/ML applications are deployed and executed, ultimately contributing to the advancement of AGI in an affordable and sustainable manner. Join us in our passion to shape the future of computing!

This role is offered under the AGI Computing Lab (AGICL) as part of DSRA. We are a research-driven systems lab working at the intersection of large language models, accelerator hardware, and high-performance software stacks. Our mission is to design, prototype, and optimize next-generation AI systems through tight hardware–software co-design. Our team works hands-on with cutting-edge accelerator hardware, experimental memory systems, and emerging domain-specific languages (DSLs). We build and optimize a Triton-based software stack that pushes the limits of performance, efficiency, and scalability for modern LLM workloads.

We are looking for a Senior AI Systems Engineer with deep experience in high-performance Triton kernel development on modern accelerators.
In this role, you will design, analyze, and optimize performance-critical kernels used in large-scale LLM inference and training pipelines. You will work closely with hardware architects, compiler engineers, and ML researchers to identify performance bottlenecks, interpret profiling data, and co-design solutions that span software and hardware boundaries. This role is ideal for engineers who enjoy working close to the hardware stack while still reasoning deeply about model-level abstractions.

Location: Daily onsite presence at our San Jose, CA office / U.S. headquarters, in alignment with our Flexible Work policy.

Requirements

  • Bachelor’s degree with 5+ years, Master’s with 3+ years, or PhD with 0+ years of industry experience.
  • Strong experience writing high-performance Triton kernels for GPUs or other accelerators.
  • Solid understanding of LLM fundamentals, including attention mechanisms, transformer architectures, and inference/training workflows.
  • Deep knowledge of accelerator hardware architecture, including memory hierarchies (HBM, SRAM, caches).
  • Proven ability to read and interpret profiling data and performance counters.
  • Experience diagnosing and resolving performance bottlenecks in kernel-level code.
  • Strong systems programming skills in Python and low-level performance-oriented programming paradigms.
  • Experience with hardware–software co-design or compiler-assisted optimization.
  • Familiarity with FlashAttention, fused kernels, MoE kernels, and different attention mechanisms.
  • Experience working with emerging or experimental domain-specific languages (DSLs) for accelerator programming.
  • Background in ML systems, compilers, or performance engineering.
  • Prior experience working with different accelerator backends (including but not limited to CUDA).
  • Ability to work effectively in cross-functional, research-oriented environments.
  • Strong analytical and problem-solving skills.
  • You’re inclusive, adapting your style to the situation and diverse global norms of our people.
  • An avid learner, you approach challenges with curiosity and resilience, seeking data to help build understanding.
  • You’re collaborative, building relationships, humbly offering support and openly welcoming approaches.
  • Innovative and creative, you proactively explore new ideas and adapt quickly to change.
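As a flavor of the FlashAttention-style fused kernels mentioned above: their core trick is an online (streaming) softmax that processes a row of attention scores block by block, rescaling a running max and normalizer so the full row is never materialized. A minimal pure-Python sketch follows; block size and inputs are illustrative, and real kernels operate on tiles staged in SRAM rather than Python lists.

```python
import math

def online_softmax(scores, block_size=2):
    """Streaming softmax over `scores`, processed in blocks.

    Maintains a running max `m` and running normalizer `d`; whenever a new
    block raises the max, previously accumulated terms are rescaled by
    exp(m_old - m_new). This is the numerically stable trick used by
    FlashAttention-style fused attention kernels.
    """
    m = float("-inf")  # running max
    d = 0.0            # running normalizer: sum of exp(s - m)
    out = []           # unnormalized terms exp(s - m), rescaled as m grows
    for start in range(0, len(scores), block_size):
        block = scores[start:start + block_size]
        m_new = max(m, max(block))
        # Rescale everything accumulated so far to the new running max.
        scale = math.exp(m - m_new) if m != float("-inf") else 0.0
        d *= scale
        out = [v * scale for v in out]
        for s in block:
            e = math.exp(s - m_new)
            d += e
            out.append(e)
        m = m_new
    return [v / d for v in out]
```

The result matches an ordinary two-pass softmax, but each input element is read only once, which is what lets a fused kernel keep attention tiles in fast on-chip memory.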

Responsibilities

  • Design, implement, and optimize high-performance Triton kernels for LLM workloads on existing accelerators.
  • Analyze kernel performance using profiling tools; interpret metrics such as latency, throughput, occupancy, memory bandwidth, and compute utilization.
  • Identify performance bottlenecks in kernel design (e.g., memory access patterns, synchronization, tiling strategies) and propose concrete optimizations.
  • Work across the stack, from model architecture to kernel implementation, to ensure end-to-end performance efficiency.
  • Collaborate with hardware and compiler teams on hardware–software co-design, providing feedback that influences future accelerator and DSL designs.
  • Prototype and evaluate kernel optimizations using upcoming DSLs and experimental compiler flows.
  • Contribute to the evolution of a Triton-based software stack used for cutting-edge research and production-grade experimentation.
  • Document design decisions, performance trade-offs, and optimization strategies clearly for internal and external stakeholders.
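As a flavor of the profiling analysis described above, a simple roofline model relates a kernel's measured FLOPs and bytes moved to whether it is memory- or compute-bound. The sketch below is illustrative only; the peak-throughput and bandwidth figures are hypothetical, not numbers for any specific accelerator.

```python
def roofline_bound(flops, bytes_moved, peak_tflops, peak_bw_gbs):
    """Classify a kernel under a simple roofline model.

    flops        -- floating-point operations executed by the kernel
    bytes_moved  -- bytes transferred to/from main memory (e.g., HBM)
    peak_tflops  -- hypothetical peak compute throughput (TFLOP/s)
    peak_bw_gbs  -- hypothetical peak memory bandwidth (GB/s)
    Returns (arithmetic_intensity, bound, attainable_tflops).
    """
    intensity = flops / bytes_moved                       # FLOP per byte
    ridge = (peak_tflops * 1e12) / (peak_bw_gbs * 1e9)    # intensity at ridge point
    bound = "compute" if intensity >= ridge else "memory"
    # Attainable throughput is capped by whichever roof the kernel hits first.
    attainable = min(peak_tflops, intensity * peak_bw_gbs * 1e9 / 1e12)
    return intensity, bound, attainable
```

For example, an elementwise kernel moving ~12 bytes per FLOP lands far left of the ridge point and is memory-bound, which is why fusion and tiling (raising arithmetic intensity) are the first optimizations to try.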

Benefits

  • Opportunity to work on cutting-edge accelerator hardware and experimental software stacks.
  • Direct impact on the performance and design of next-generation AI systems.
  • A highly collaborative environment spanning hardware, systems, and ML research.
  • Flexibility to publish, prototype, and influence future hardware and software directions.
  • Give Back: With a charitable giving match and frequent opportunities to get involved, we take an active role in supporting the community.
  • Enjoy Time Away: You’ll start with 4+ weeks of paid time off a year, plus holidays and sick leave, to rest and recharge.
  • Care for Family: Whatever family means to you, we want to support you along the way, including a stipend for fertility care or adoption, medical travel support, and virtual vet care for your fur babies.
  • Prioritize Emotional Wellness: With on-demand apps and free confidential therapy sessions, you’ll have support no matter where you are.
  • Stay Fit: Eating well and being active are important parts of a healthy life. Our onsite Café and gym, plus virtual classes, make it easier.
  • Embrace Flexibility: Benefits are best when you have the space to use them. That’s why we facilitate a flexible environment so you can find the right balance for you.


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
Number of Employees: 5,001-10,000
