AI Models, Product Manager

Cerebras Systems · Sunnyvale, CA · Hybrid

About The Position

Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

Ready to optimize and launch the world's top AI models on the world's fastest inference? As a member of the Inference core product team at Cerebras, you'll work with model labs and top customers to launch groundbreaking new models. You'll review research, select optimizations, design quality evaluations, and curate go-to-market materials for new model launches. Our team moves fast, bringing blazing-fast inference to developers and customers.

Requirements

  • 5+ years of experience as a product manager, currently at or above the level of Senior PM.
  • 5+ years of total technical work experience (e.g. SWE, ML researcher, solution engineer).
  • Ability to thrive in a fast-paced, dynamic environment, with an entrepreneurial sense of ownership and the ability to lead projects.
  • Knowledge of and passion for open-source models and generative AI research.
  • Knowledge of the community model ecosystem, including PyTorch, Hugging Face, vLLM, and SGLang.
  • Highly motivated, independent, organized, and an effective communicator.
  • Comfortable using Python with the chat completions API for basic model testing (see the sketch below).
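
For reference, here is a minimal sketch of the kind of basic model testing described above, using the Python openai client against an OpenAI-compatible chat completions endpoint. The base URL, API key environment variable, and model name are illustrative placeholders, not values specified in this posting.

    # Minimal sketch: basic model testing via an OpenAI-compatible chat
    # completions endpoint. Base URL, API key env var, and model name are
    # placeholders for illustration only.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",  # assumed endpoint; verify against current docs
        api_key=os.environ["CEREBRAS_API_KEY"],  # hypothetical environment variable
    )

    response = client.chat.completions.create(
        model="llama3.1-8b",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize wafer-scale inference in one sentence."},
        ],
        max_tokens=128,
        temperature=0.2,
    )
    print(response.choices[0].message.content)

Swapping the model string and prompts is usually enough to spot-check a new model's behavior before deeper evaluation.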

Nice To Haves

  • Product manager experience at a model training lab or a company that implements open-source models.
  • Experience working with customers in a solution engineering role.
  • Experience writing technical marketing assets and social media content, with a growing portfolio.
  • Experience working in a cross-functional organization and leading projects across multiple teams.
  • Experience writing model quality evaluations and system prompt harnesses (see the sketch after this list).
  • Experience writing application code for use cases such as code generation or deep-research search applications.
  • Expertise in agentic flows and current LLM model family architectures.
  • Understanding of model compilers and optimization.
  • Contributor to communities like vLLM, SGLang, PyTorch, or Hugging Face transformers.
  • Experience with model optimization or compression methods like quantization.
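
As an illustration of the evaluation and harness work mentioned above, here is a minimal sketch of a quality-evaluation loop: a fixed system prompt run over a small test set, scored by exact match. The client configuration, model name, and test cases are assumptions for the example, not part of this posting.

    # Minimal sketch of a model quality evaluation harness: run a fixed
    # system prompt over a small test set and score responses by exact match.
    # Client setup, model name, and test cases are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes base_url/api_key are configured via environment

    SYSTEM_PROMPT = "Answer with a single word."
    TEST_CASES = [
        {"prompt": "What is the capital of France?", "expected": "Paris"},
        {"prompt": "What is 2 + 2?", "expected": "4"},
    ]

    def run_eval(model: str) -> float:
        """Return the fraction of test cases the model answers exactly."""
        correct = 0
        for case in TEST_CASES:
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": case["prompt"]},
                ],
                temperature=0.0,
            )
            answer = (response.choices[0].message.content or "").strip()
            correct += int(answer == case["expected"])
        return correct / len(TEST_CASES)

    print(f"accuracy: {run_eval('llama3.1-8b'):.2f}")  # placeholder model name

Real harnesses typically add larger test sets, fuzzier scoring (for example, normalized or LLM-graded matching), and per-category breakdowns, but the loop structure is the same.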

Responsibilities

  • Curate our model portfolio and own its quality.
  • Drive the model portfolio strategy: select strategic models to support and follow research trends.
  • Collaborate with model labs and open-source model builders to ensure early support, facilitate early-access collaboration, and lead public launches.
  • Monitor customer and community feedback on model strengths, and design model evaluations to ensure top-quality implementations.
  • Craft product marketing and design features and demos that showcase the models' strengths.
  • Support top customers in enabling, optimizing, and integrating models into their products.
  • Select performance optimizations based on goals and hardware strengths.
  • Drive exciting launches, working with model enablement and optimization engineering, deployment engineering, sales, and marketing.