Zyphra is an artificial intelligence company based in San Francisco, California.

The Role

As a Research Engineer - AI Performance & Kernel Optimization, you will improve and optimize the performance of our large-scale language model training and inference stacks. You will work closely with our pretraining and inference teams to identify bottlenecks, design and implement highly optimized kernels, and push the limits of throughput, latency, and hardware utilization across a range of accelerator platforms. This role is suited for someone who enjoys deep systems work, cares about performance at every level of the stack, and is excited to translate low-level optimizations into meaningful gains for frontier-scale AI systems.

You'll Work Across:
- Kernel development and optimization for large-scale ML workloads, using any level of the stack from PTX/assembly to CUDA, HIP, Triton, or other GPU DSLs
- Performance tuning for training and inference stacks across GPUs and other accelerators
- Profiling and eliminating bottlenecks in memory movement, communication, scheduling, and compute utilization
- Optimizing distributed training and inference systems for large MoE models, including large-scale model parallelism
- Portability and optimization across non-NVIDIA hardware, with special interest in AMD hardware such as the MI300X and MI355X
- Collaboration with research and infrastructure teams to turn systems improvements into real-world model training and inference gains
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed