GPU Kernel Engineer

Sciforium · San Francisco, CA

About The Position

We are seeking a highly skilled GPU Kernel Engineer who is passionate about pushing the limits of performance on modern accelerators. In this role, you will design and optimize custom GPU kernels that power next-generation large-scale AI systems. You will work across the hardware–software stack, from low-level kernel development to integrating optimized ops into high-level ML frameworks used for large-scale training and inference. This role is ideal for someone who thrives at the intersection of GPU programming, systems engineering, and cutting-edge AI workloads, and who wants to make meaningful contributions to the efficiency and scalability of our ML platform.

Requirements

  • 5+ years of industry or research experience in GPU kernel development or high-performance computing.
  • Bachelor’s, Master’s, or PhD in Computer Science, Computer Engineering, Electrical Engineering, Applied Mathematics, or a related field.
  • Strong programming skills in C++ and Python, and familiarity with ML frameworks.
  • Deep expertise in CUDA/ROCm, GPU memory models, and performance optimization strategies.
  • Hands-on experience with Triton and/or JAX Pallas for custom kernel development (a short Triton sketch of this kind of work follows this list).
  • Strong understanding of PTX, GPU ASM, and low-level GPU execution.
  • Extensive experience writing and optimizing custom GPU kernels in C++ and PTX.
  • Proven ability to integrate low-level kernels into PyTorch, JAX, or similar frameworks.
  • Experience working with large-scale LLM workloads (training or inference).
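For a concrete flavor of the custom-kernel work described above, here is a minimal, illustrative sketch of a Triton elementwise-add kernel wrapped for PyTorch tensors. The kernel and wrapper names are hypothetical and not part of any existing codebase; real kernels in this role (fused attention, GEMM variants, quantized ops) are substantially more involved.

    # Illustrative sketch only: a minimal Triton vector-add kernel with a PyTorch wrapper.
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one BLOCK_SIZE-wide tile of the input.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the tail of the array
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Thin PyTorch-facing wrapper: allocate the output and launch the kernel.
        assert x.is_cuda and y.is_cuda and x.shape == y.shape
        out = torch.empty_like(x)
        n_elements = out.numel()
        grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
        add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
        return out

    if __name__ == "__main__":
        a = torch.randn(4096, device="cuda")
        b = torch.randn(4096, device="cuda")
        torch.testing.assert_close(add(a, b), a + b)

The masked loads and stores handle the tail of the array when its length is not a multiple of the block size; choosing block sizes, tiling strategies, and memory-access patterns for real workloads is the core of the job.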

Nice To Haves

  • Experience with AMD GPUs and ROCm optimization.
  • Familiarity with JAX FFI and custom ML operator development.
  • Experience with efficient model serving frameworks (e.g., vLLM, TensorRT).
  • Experience with TPUs, XLA, or similar accelerator programming environments.
  • Contributions to open-source ML systems, compilers, or GPU kernels.

Responsibilities

  • Design, implement, and optimize custom GPU kernels using C++, PTX, CUDA, ROCm, Triton, and/or JAX Pallas.
  • Profile and optimize end-to-end performance of ML operations, with a focus on large-scale LLM training and inference.
  • Integrate low-level GPU kernels into frameworks such as PyTorch, JAX, and custom internal runtimes (see the illustrative JAX Pallas sketch after this list).
  • Develop performance models, identify bottlenecks, and deliver kernel-level improvements that significantly accelerate AI workloads.
  • Collaborate with ML researchers, distributed systems engineers, and model-serving teams to optimize compute performance across the stack.
  • Work closely with hardware vendors (NVIDIA/AMD) and stay current on the latest GPU architecture capabilities and compiler/toolchain improvements.
  • Contribute to tooling, documentation, benchmarking suites, and testing frameworks to ensure correctness and performance reproducibility.
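As an illustration of framework integration on the JAX side, the sketch below wires a toy JAX Pallas kernel into a jitted function via pl.pallas_call. Names, shapes, and the kernel itself are illustrative assumptions; production kernels would also specify grids, block specs, and memory spaces explicitly.

    # Illustrative sketch only: a toy JAX Pallas kernel called from jitted JAX code.
    import jax
    import jax.numpy as jnp
    from jax.experimental import pallas as pl

    def add_kernel(x_ref, y_ref, o_ref):
        # Refs point to blocks of the operands; this body runs per grid step.
        o_ref[...] = x_ref[...] + y_ref[...]

    @jax.jit
    def add(x, y):
        # pallas_call lowers the kernel and exposes it as a normal JAX operation.
        return pl.pallas_call(
            add_kernel,
            out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        )(x, y)

    x = jnp.arange(8, dtype=jnp.float32)
    y = jnp.ones(8, dtype=jnp.float32)
    print(add(x, y))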

Benefits

  • Medical, dental, and vision insurance
  • 401(k) plan
  • Daily lunch, snacks, and beverages
  • Flexible time off
  • Competitive salary and equity