Performance Modeling Engineer

OpenAI, San Francisco, CA
Hybrid

About The Position

OpenAI’s Hardware organization develops system and infrastructure solutions designed for the unique demands of advanced AI workloads. The organization works closely with architecture, infrastructure, and vendor teams to evaluate system performance and guide critical design decisions. The team focuses on building and applying performance modeling frameworks to understand system behavior, quantify tradeoffs, and support next-generation infrastructure design.

The Performance Modeling Engineer will support the development and application of modeling tools used to evaluate AI system performance and inform architectural decisions. In this role, you will partner closely with Senior Performance Modeling Engineers and the Performance Modeling Lead to analyze system behavior, run simulations and analytical models, and help evaluate tradeoffs across compute, memory, networking, and storage. You will contribute to building modeling frameworks while developing a strong foundation in system architecture and AI infrastructure.

This role is ideal for early-career engineers with 1–2 years of experience in software engineering, systems analysis, or performance modeling who are excited to grow in large-scale infrastructure and hardware/software systems.

This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance.

Requirements

  • 1–2 years of experience in software engineering, systems modeling, performance analysis, or related technical work
  • Strong programming skills and experience building technical tools, scripts, or frameworks
  • Familiarity with system architecture fundamentals such as compute, memory, and networking
  • Ability to reason about system performance, bottlenecks, and scaling behavior
  • Strong analytical and problem-solving skills with comfort working in quantitative environments
  • Ability to learn quickly and work effectively across technical teams

Nice To Haves

  • Exposure to AI/ML workloads, distributed systems, or large-scale infrastructure
  • Experience with simulation tools, benchmarking, profiling, or performance analysis
  • Familiarity with data center systems, server architecture, or hardware platforms
  • Interest in system architecture and hardware/software co-design
  • Internship or early professional experience in performance engineering, infrastructure, or systems design

Responsibilities

  • Support the development and maintenance of performance modeling tools and frameworks
  • Assist in building models to evaluate system behavior across compute, memory, networking, and interconnect subsystems
  • Help analyze distributed system scaling behavior and identify performance bottlenecks
  • Run simulations and analytical models to support architecture and infrastructure decisions
  • Partner with senior engineers to evaluate design tradeoffs across hardware and system components
  • Interpret modeling outputs and help translate findings into clear recommendations
  • Validate models using benchmarking data and real system performance measurements
  • Improve modeling workflows, documentation, and usability for broader team adoption
  • Collaborate cross-functionally with hardware, infrastructure, and architecture teams
  • Continuously build technical depth across AI infrastructure, system architecture, and performance analysis

Benefits

  • Relocation assistance