Machine Learning Engineer, API Multicloud

OpenAI · San Francisco, CA

About The Position

OpenAI’s API Multicloud team, part of B2B Applications, is tasked with expanding OpenAI’s API platform into key cloud environments, beginning with AWS. The team’s goal is to safely and broadly distribute OpenAI’s API by enabling crucial API technologies within AWS-native environments, collaborating closely with Amazon and internal teams such as Codex, Research, Safety Systems, and Applied. The team focuses on integrating core developer and enterprise functionalities into cloud-native settings, including AWS-hosted Codex, model customization/post-training as a service, and new stateful runtime environments for agentic workloads. This work combines production ML systems, developer platforms, model behavior, and large-scale infrastructure.

The role involves building and enhancing AI systems that help strategic partners adapt OpenAI models for critical use cases in cloud-native environments. This encompasses post-training workflows, evaluation, data pipelines, model behavior, and API/infrastructure integration. The engineer will bridge partner requirements and core ML systems, diagnosing issues in training and evaluation and translating insights into platform improvements.

Collaboration with Research, Applied, Safety Systems, infrastructure teams, and external technical partners is key to resolving complex model-performance challenges. Success in this role will enable strategic partners and internal teams to confidently improve model behavior, leading to measurable product enhancements and more reliable, scalable, and effective underlying systems.

Requirements

  • Master’s or PhD in Computer Science, Machine Learning, or a related field, or equivalent practical experience.
  • 7+ years of professional engineering experience in relevant ML, infrastructure, or product-driven engineering roles.
  • Strong ML engineering experience building, training, fine-tuning, evaluating, or deploying production AI systems, with hands-on experience in deep learning, transformer models, and frameworks like PyTorch or TensorFlow.
  • Familiarity with training and fine-tuning large language models, including methods like supervised fine-tuning, distillation, preference optimization, reinforcement learning, or other post-training techniques.
  • Strong software engineering fundamentals, including data structures, algorithms, systems design, and high-quality production code in Python, Rust, or similar languages.
  • Experience with model customization, evaluation systems, data pipelines, distributed systems, cloud infrastructure, or production ML platform tradeoffs.
  • Ability to operate across model behavior, APIs, and infrastructure, while collaborating closely with Research, Safety, product engineering, infrastructure, and external technical partners.
  • Comfort moving quickly through ambiguity, owning problems end-to-end, and learning whatever is needed to get the job done.

Nice To Haves

  • Experience with AWS, Kubernetes, agents, tool use, runtime environments, AI developer platforms, or speech models.

Responsibilities

  • Partner with strategic customers and internal teams to define target model behaviors, diagnose failure modes, and translate real-world needs into training, evaluation, and system requirements.
  • Build and scale production ML systems for model customization, post-training, and fine-tuning-as-a-service workflows.
  • Investigate whether training and customization workflows are producing the intended outcomes, and identify changes to data, evaluation, training, or infrastructure that improve performance.
  • Partner with backend and infrastructure engineers to integrate ML capabilities into AWS-native API environments.
  • Feed learnings from partner deployments back into the platform by proposing and implementing improvements to post-training systems, tooling, APIs, and developer workflows.
  • Work closely with Research and Applied teams to bring model improvements, training workflows, and evaluation best practices into production.
  • Help design systems that allow strategic partners and enterprise customers to safely customize OpenAI models for high-value use cases.
  • Debug and improve complex systems spanning model behavior, training data, APIs, distributed infrastructure, and customer-facing product surfaces.
  • Operate with high ownership in a 0→1 environment where requirements are ambiguous, systems are evolving quickly, and reliability matters.