OpenAI’s infrastructure organization builds and operates the systems that power frontier AI workloads at global scale. As our compute footprint expands across first-party data centers, cloud providers, and strategic partners, efficient capacity planning and resource allocation become critical to delivering reliable and cost-effective compute. The Compute Optimization team sits at the intersection of engineering, operations, finance, and infrastructure strategy. We develop the models, decision systems, and planning frameworks that optimize how compute resources are deployed, scheduled, and scaled across a rapidly growing global environment.

We are seeking a Compute Optimization Researcher/Engineer to build the systems that maximize the value of OpenAI’s global compute capacity. In this role, you will work on high-impact optimization problems spanning capacity allocation, demand forecasting, cluster planning, workload placement, and infrastructure utilization. You will combine mathematical modeling, software systems, and cross-functional execution to improve how compute is planned and consumed across GPU clusters, networking, storage, and data center environments. This role is ideal for candidates with backgrounds in operations research, optimization, applied math, infrastructure systems, or large-scale capacity planning.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance.
Job Type: Full-time
Career Level: Senior
Education Level: Ph.D. or professional degree
Number of Employees: 1-10 employees