Multimodal LLMs Research Engineer

Apple · Sunnyvale, CA

About The Position

We are seeking exceptional individuals who thrive in collaborative environments and are driven to push the boundaries of what is achievable with multimodal inputs and large language models. Our centralized applied research and engineering group develops cutting-edge Computer Vision and Machine Perception technologies across Apple products, balancing advanced research with product delivery to ensure Apple quality and pioneering experiences.

A successful candidate will possess deep expertise and hands-on experience across the full lifecycle of Multimodal LLM development: early ideation, data definition, model training, and fine-tuning. We are looking for a proven track record, demonstrated through academic research, industry contributions, or both, in developing multimodal LLMs and in advanced topics such as agentic AI, reasoning, and large-scale model evaluation. This role offers the opportunity to drive groundbreaking research projects spanning foundational concepts to practical applications.

Requirements

  • Ph.D. with relevant research background, or Master of Science and a minimum of 2 years of relevant industry experience
  • Demonstrated track record through publications, patents, and/or shipping relevant features
  • Strong Python programming experience
  • Strong PyTorch and/or JAX programming experience
  • Ability to effectively utilize AI code development tools to accelerate the development process

Nice To Haves

  • Strong publication record in relevant venues, such as CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, etc.
  • Technical leadership experience guiding technical efforts across diverse teams and individuals
  • Experience shipping multimodal LLMs (MM-LLMs) in products