The Core Network Engineering team owns the end-to-end networking stack that connects OpenAI's compute infrastructure, spanning global WAN/edge connectivity, data-center networking, and high-performance host/xPU networking used for large-scale training and inference workloads. The team is responsible for ensuring that networking is never the bottleneck to model training efficiency, cluster reliability, or fleet expansion, and it designs and operates the systems that provide predictable, high-throughput, low-latency connectivity across some of the world's most advanced AI infrastructure.

We're looking for engineers to help build and operate the networking foundation behind OpenAI's frontier AI systems. Depending on your background and area of focus, you may work across host networking, datacenter fabrics, or global WAN infrastructure. The problems span low-level systems software, distributed infrastructure, protocol readiness, observability, performance engineering, automation, and large-scale network operations. You'll work on systems where microseconds of latency, tail performance, and network reliability directly impact model training efficiency and production serving performance. This role is ideal for engineers who enjoy operating close to the hardware/software boundary and solving performance-critical infrastructure problems at massive scale.
Job Type: Full-time
Career Level: Mid Level
Education Level: Not specified