Member of Technical Staff, ML Kernels

Netpreme
Boston, MA
Onsite

About The Position

We are seeking a Member of Technical Staff, Machine Learning Kernels to design, optimize, and benchmark high-performance compute kernels for modern machine learning workloads. This role is for a deeply technical engineer who enjoys working close to hardware — writing CUDA kernels, investigating subtle performance artifacts, building benchmarks, and serving as a go-to expert on accelerator behavior. You will act as a hands-on performance specialist, partnering closely with research, systems, and infrastructure teams to unlock efficiency gains across GPUs today and other accelerators (e.g., TPU, Trainium) as we expand our hardware partnerships. This role will be performed onsite from one of our offices in Santa Clara, CA or Boston, MA.

Requirements

  • Strong experience writing and optimizing CUDA kernels or equivalent low-level accelerator code.
  • Deep understanding of GPU architecture, including memory systems, parallel execution, and performance tradeoffs.
  • Experience with performance profiling and benchmarking tools (e.g., Nsight Systems / Compute, nvprof, framework-level profilers).
  • Proficiency in C++ and low-level performance-oriented programming.
  • Ability to independently investigate ambiguous or poorly understood performance issues and drive them to resolution.
  • Comfortable switching between different hardware ecosystems and learning new accelerator stacks as needed.

Nice To Haves

  • Experience with ML framework internals (e.g., PyTorch, TensorFlow, XLA) and custom operator development.
  • Prior work with non-GPU accelerators such as TPU, Trainium, IPU, or similar.
  • Familiarity with mixed-precision and low-precision compute (e.g., FP16, BF16, FP8).
  • Contributions to open-source performance, systems, or ML infrastructure projects.

Responsibilities

  • Design, implement, and optimize high-performance ML kernels, primarily targeting GPUs (CUDA), with an emphasis on throughput, latency, and memory efficiency.
  • Profile, benchmark, and analyze performance across different hardware configurations, identifying bottlenecks and subtle performance artifacts.
  • Debug and reason about low-level performance issues involving memory hierarchy, scheduling, synchronization, and numerical formats.
  • Build and maintain benchmarking and evaluation tools to compare performance across GPUs and other accelerators.
  • Advise internal teams on GPU and accelerator performance characteristics, tradeoffs, and best practices.
  • Explore and prototype support for alternative accelerator platforms (e.g., TPU, Amazon Trainium) as partnerships and needs evolve.
  • Collaborate closely with ML researchers and systems engineers to translate algorithmic needs into efficient kernel implementations.

Benefits

  • Competitive compensation commensurate with experience, including base salary, performance-based bonus, and an early-stage equity grant
  • Comprehensive benefits including health, dental, vision, and life insurance
  • Well-equipped, sunny offices in Santa Clara, CA and Boston, MA
  • Relocation assistance and visa sponsorship
  • Perks include a daily lunch stipend, 401(k) match, and more
  • A collaborative, continuous-learning work environment with smart, dedicated colleagues engaged in developing the next generation of architecture for high-performance computing