Systems Performance Engineer

Micron | Austin, TX

About The Position

The engineer will work with senior engineers and researchers on AI training and inference systems, with a strong focus on LLM execution engines, data and KV‑cache management, and multi‑tier memory hierarchies across modern data‑center platforms. The role centers on end‑to‑end performance characterization and optimization of large‑scale AI workloads, spanning single‑node GPUs to rack‑scale inference deployments. Responsibilities include systems software development, workload engineering, performance analysis, and memory‑centric optimization for LLM training, serving, and agentic AI frameworks. The work emphasizes real customer inference and training workloads, emerging memory technologies (HBM, LP/DRAM, CXL, NVMe, remote memory fabrics), and the economics and token‑level efficiency of large‑scale inference systems. This role combines hands‑on engineering with applied systems research, directly influencing next‑generation AI platforms and memory‑driven system architectures.

Requirements

  • Bachelor’s or Master’s degree, or equivalent experience, in Computer Science, Electrical Engineering, or a related field
  • Strong foundation in operating systems, memory systems, parallel computing, or distributed systems
  • Proficiency in systems programming and analysis using C/C++ and Python
  • Experience working in Linux environments, including debugging, profiling, and automation
  • Solid understanding of modern server architectures, including GPUs, CPUs, cache hierarchies, NUMA, and memory subsystems
  • Experience analyzing performance data and reasoning about system‑level behavior
  • Strong written and verbal communication skills
  • Ability to work independently on scoped problems and collaboratively on larger system efforts

Nice To Haves

  • Experience with LLM training and inference systems, including execution runtimes and serving frameworks
  • Hands‑on experience with KV cache management, long‑context execution, or stateful inference workloads
  • Familiarity with GPU architectures and AI accelerators, including memory and interconnect behavior
  • Experience with multi‑tier memory systems, including HBM, LP/DRAM, CXL‑attached memory, NVMe, and remote/disaggregated memory
  • Experience profiling and optimizing AI inference pipelines, including batching, scheduling, and latency‑sensitive workloads
  • Familiarity with agentic AI frameworks, multi‑agent systems, or workflow‑based inference pipelines
  • Experience with distributed AI systems, rack‑scale deployments, or cluster‑level performance analysis
  • Exposure to memory or system simulators (e.g., gem5, Ramulator) or analytical performance modeling
  • Familiarity with containers, orchestration, and AI infrastructure stacks
  • Experience applying machine learning techniques to systems optimization or performance analysis

Responsibilities

  • Build and improve systems software tools for profiling, tracing, and analyzing LLM training and inference workloads
  • Design and evaluate KV‑cache and state‑management strategies for LLM serving, including reuse, eviction, compression, tiering, and lifecycle management
  • Build and extend benchmarking, simulation, and emulation frameworks for AI inference and training across heterogeneous memory tiers
  • Develop and evaluate data placement, migration, and prefetching algorithms across HBM, LP/DRAM, CXL memory pools, NVMe, and remote memory systems
  • Characterize and optimize LLM execution engines (prefill/decode), including attention behavior, batching strategies, and token‑level performance
  • Analyze rack‑scale and cluster‑scale inference deployments, focusing on throughput, latency, utilization, cost, and token economics
  • Develop workloads that reflect real customer AI systems, including LLM serving, agentic pipelines, retrieval‑augmented generation, multimodal inference, and long‑context workloads
  • Instrument and analyze performance across GPUs, CPUs, memory subsystems, interconnects, and storage, identifying end‑to‑end bottlenecks
  • Evaluate system interactions across OS, runtime layers, containerized deployments, and distributed inference stacks
  • Automate performance measurement, experimentation, and analysis workflows to improve repeatability and scale
  • Summarize findings into clear methodologies, internal reports, and technical presentations for engineering and leadership audiences
  • Collaborate across engineering, architecture, and research teams, and with external academic and industry partners
  • Provide actionable feedback to product, architecture, and platform teams to influence future AI systems and memory designs

Benefits

  • Choice of medical, dental, and vision plans
  • Benefit programs that help protect your income if you are unable to work due to illness or injury
  • Paid family leave
  • Robust paid time-off program
  • Paid holidays