AI Research Scientist, PAR Media

Meta · Menlo Park, CA

About The Position

We are seeking AI Researchers to join the Product and Applied Research (PAR) Media group within Meta Superintelligence Labs (MSL). As a member of the PAR Media group, you will drive innovation in image and video understanding, generation, and narrative creation at an unprecedented scale. We own the research, development, and deployment of cutting-edge multimodal models across Meta AI, Meta’s Family of Apps (FoA), and the entire Meta creator and developer ecosystem. Our work directly powers product roadmaps with flexible, state-of-the-art solutions designed to lead, not follow. We partner closely with AI product teams across Meta to translate our research into impactful, real-world experiences. This means we’re not just building technology; we’re building the future of how people create, communicate, and connect. If you’re passionate about advancing the future of AI-driven media experiences and eager to make a tangible impact on billions of users, we invite you to join us on this journey.

Requirements

  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • PhD in Computer Science, AI/ML, or a relevant technical field
  • 1+ year of industry research experience in LLM/NLP, computer vision, or related AI/ML areas
  • Experience owning and/or driving complex technical projects end to end
  • Skilled in model training, data, or inference and efficiency for image, video, and/or related multimodal models
  • Proficient in media generation, understanding, and/or grounding
  • Programming experience in Python and hands-on experience with frameworks such as PyTorch or Spark (a minimal illustrative sketch follows this list)
  • Demonstrated significant industry influence in the field of AI and/or recently published research in leading peer-reviewed conferences (e.g., ACL, NeurIPS, ICML, ICLR, AAAI, KDD, CVPR, ICCV)
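
As a concrete, if heavily simplified, illustration of the PyTorch proficiency called for above, here is a minimal sketch of a CLIP-style dual-encoder contrastive training step on random stand-in features. Every class name, dimension, and hyperparameter is invented for illustration; this is not Meta’s actual modeling stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy image-text alignment model: project both modalities into a shared
# embedding space and train with a symmetric contrastive (CLIP-style) loss.
class ToyDualEncoder(nn.Module):
    def __init__(self, img_dim=512, txt_dim=300, embed_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)  # image-feature projection
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # text-feature projection

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img @ txt.T  # pairwise cosine-similarity logits

model = ToyDualEncoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Random stand-in batch: 8 paired image/text feature vectors.
img_feats = torch.randn(8, 512)
txt_feats = torch.randn(8, 300)
labels = torch.arange(8)  # the i-th image matches the i-th caption

optimizer.zero_grad()
logits = model(img_feats, txt_feats)
# Average the loss over both matching directions (image->text, text->image).
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```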

Nice To Haves

  • Experience working on frontier-quality/state-of-the-art Large Media Models
  • First-author publications at top peer-reviewed conferences (e.g., ACL, NeurIPS, ICML, ICLR, AAAI, KDD, CVPR, ICCV)

Responsibilities

  • Contribute to the training of next-generation multimodal foundation models, advance their capabilities in understanding, generation, and grounding, and enable them for downstream product use cases
  • Support creative data sourcing and high-quality pre-, mid-, and post-training data curation, and scale and optimize data pipelines for multimodal large language models (LLMs); a simplified curation example follows this list
  • Lead, collaborate on, and execute research that pushes forward the state of the art in multimodal reasoning and generation, prioritizing research that can be applied directly to Meta’s product development
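
In miniature, the data-curation responsibility above might look like the following toy filtering pass over (image_path, caption) records. The field names and thresholds here are assumptions made up for this sketch, not a real Meta pipeline.

```python
# Hypothetical quality filter for caption data: drop captions that are
# too short, too long, or dominated by non-alphabetic characters.
def keep(record: dict) -> bool:
    caption = record.get("caption", "").strip()
    words = caption.split()
    if not 3 <= len(words) <= 64:  # illustrative length bounds
        return False
    # Fraction of characters that are letters or spaces.
    alpha_ratio = sum(c.isalpha() or c.isspace() for c in caption) / max(len(caption), 1)
    return alpha_ratio >= 0.8  # illustrative quality threshold

raw = [
    {"image_path": "img/001.jpg", "caption": "A dog running on the beach"},
    {"image_path": "img/002.jpg", "caption": "!!###"},
    {"image_path": "img/003.jpg", "caption": "cat"},
]
curated = [r for r in raw if keep(r)]
print(f"kept {len(curated)} of {len(raw)} records")  # kept 1 of 3 records
```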

Benefits

  • Bonus
  • Equity
  • Benefits