Apple Inc. · posted about 1 month ago
Full-time • Mid Level
Cupertino, CA
5,001-10,000 employees
Computer and Electronic Product Manufacturing

Join the team building the next generation of Apple Intelligence! The System Intelligence Machine Learning (SIML) organization is looking for a Machine Learning Research Engineer in the domain of multi-modal perception and reasoning. This is an opportunity to work at the core of Apple Intelligence, across modalities such as vision, language, gesture, gaze, and touch, enabling highly intuitive and personalized intelligence experiences across the Apple ecosystem. This role requires experience with vision-language models and the ability to fine-tune, adapt, and distill multi-modal LLMs. You will be part of a fast-paced, impact-driven Applied Research organization working on cutting-edge machine learning at the heart of the most loved features on Apple platforms, including Apple Intelligence, Camera, Photos, Visual Intelligence, and more!

SELECTED REFERENCES TO OUR TEAM'S WORK:
https://www.youtube.com/watch?v=GGMhQkHCjxo&t=255s
https://support.apple.com/guide/iphone/use-visual-intelligence-iph12eb1545e/ios

As a Machine Learning Research Engineer, you will help design and develop models and algorithms for multimodal perception and reasoning, leveraging Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs). You will collaborate with experienced researchers and engineers to explore new techniques, evaluate performance, and translate product needs into impactful ML solutions. Your work will contribute directly to user-facing features across billions of devices.

KEY RESPONSIBILITIES:
  • Contribute to the development and adaptation of AI/ML models for multimodal perception and reasoning
  • Design robust algorithms that integrate visual and language data for comprehensive understanding
  • Collaborate closely with cross-functional teams to translate product requirements into effective ML solutions
  • Conduct hands-on experimentation, model training, and performance analysis
  • Communicate research outcomes to technical and non-technical stakeholders, providing actionable insights
  • Stay current with emerging methods in VLMs, MLLMs, and related areas

QUALIFICATIONS:
  • Proven track record of research contributions demonstrated through publications in top-tier conferences and journals
  • Background in multi-modal reasoning, VLM, and MLLM research, with impactful software projects
  • Solid understanding of natural language processing (NLP) and computer vision fundamentals