Intel Corp. • Posted 27 days ago
Full-time • Mid Level
Onsite • San Jose, CA
5,001-10,000 employees
Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services

Overview

We are seeking a highly skilled Compiler Engineer with experience in MLIR (Multi-Level Intermediate Representation) and performance-critical code generation. The ideal candidate will focus on designing and implementing compiler infrastructure to generate high-performance kernels for AI and machine learning. This role bridges advanced compiler technology with systems optimization, enabling domain-specific performance across heterogeneous architectures (GPUs and accelerators).

Responsibilities:

Compiler Development and Optimization
  • Design and implement MLIR-based compiler passes for lowering, optimization, and code generation
  • Build domain-specific dialects to represent compute kernels at multiple abstraction levels
  • Develop performance-tuned transformation pipelines targeting vectorization, parallelization, and memory locality

High-Performance Kernel Generation
  • Generate and optimize kernels for linear algebra, convolution, and other math-intensive primitives
  • Ensure cross-target portability while achieving near hand-tuned performance
  • Collaborate with hardware teams to integrate backend-specific optimizations

Performance Engineering
  • Profile generated code and identify performance bottlenecks across architectures
  • Implement optimizations for cache utilization, prefetching, and scheduling
  • Contribute to auto-tuning strategies for workload-specific performance

Collaboration and Research
  • Work closely with ML researchers, system architects, and runtime engineers to co-design kernel generation strategies
  • Stay up to date with developments in MLIR, LLVM, and compiler technologies
  • Publish or contribute to open-source MLIR/LLVM communities where appropriate

Qualifications:
Minimum qualifications are required to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.
  • Bachelor's degree and 7+ years of experience, OR Master's degree and 4+ years of experience, OR PhD and 2+ years of experience. The degree should be in Computer Science, Computer Engineering, Software Engineering, or a related field
  • Compiler design and optimization (MLIR, LLVM, or equivalent)
  • Code generation and transformation passes
  • High-performance computing techniques: vectorization, loop optimizations, polyhedral transformations, and memory hierarchy optimization
  • Familiarity with machine learning workloads (e.g., matrix multiplications, convolutions)
  • Hands-on experience extending MLIR dialects or contributing to the MLIR ecosystem
  • Background in GPU programming models (CUDA, ROCm, SYCL) or AI accelerators
  • Knowledge of numerical linear algebra libraries (BLAS, cuDNN, MKL) and their performance characteristics
  • Experience with auto-tuning frameworks (e.g., TVM, Halide, Triton)
  • Track record of publications, patents, or contributions to open-source compiler projects

Benefits:
  • We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock, and bonuses, as well as benefit programs that include health, retirement, and vacation.
  • Find more information about all of our Amazing Benefits here: https://intel.wd1.myworkdayjobs.com/External/page/1025c144664a100150b4b1665c750003