You will:
- Integrate, optimize, and operate large-scale inference systems that power AI scientific research.
- Build and maintain high-performance serving infrastructure delivering low-latency, high-throughput access to large language models across thousands of GPUs.
- Work closely with researchers and engineers to integrate cutting-edge inference into large-scale reinforcement learning workloads.
- Build tools and directly support frontier-scale experiments to make Periodic Labs the world’s best AI + science lab.
- Contribute to open-source LLM inference software.