Meta Platforms · Posted 4 months ago
Redmond, WA
5,001-10,000 employees
Broadcasting and Content Providers

Meta's Reality Labs Research (RL-R) brings together a world-class team of researchers, developers, and engineers to create the future of Mixed Reality (MR), Augmented Reality (AR), and Wearable Artificial Intelligence (AI). Within RL-R, the ACE team solves complex challenges in behavioral inference from sparse information. We leverage multimodal, egocentric data and cutting-edge machine learning to deliver robust, efficient models that serve everyone. Our research provides core building blocks to unlock intuitive and helpful Wearable AI, empowering everyone to harness the superpowers of this emerging technology in their daily lives.

We are looking for an experienced AI Research Scientist to join us in this initiative. In this role, you will work closely with Research Scientists and Engineers from across RL-R to develop novel state-of-the-art AI algorithms that infer human behavior patterns, with an emphasis on those that inform attention, cognition, or emotion. Examples include longitudinal gaze behaviors, gestures, and vocal cues. You will develop end-to-end wearable AI experiential validation platforms that use cutting-edge generative AI and language models to validate the impact of these signals. Further, you will work with systems engineers to optimize these models for efficiency and latency on constrained compute platforms.

You will learn constantly, dive into new areas with unfamiliar technologies, and embrace the ambiguity of AR, VR, and AI problem solving. Together, we are going to build cutting-edge prototypes, technologies, and toolsets that can define a paradigm shift in how we interact with our surroundings. We invite you to step into the adventure of a lifetime as we make science fiction real and change the world.

Responsibilities:
  • Develop novel state-of-the-art AI algorithms to infer human behavior patterns.
  • Focus on algorithms that inform attention, cognition, or emotion.
  • Develop end-to-end wearable AI experiential validation platforms.
  • Validate the impact of behavioral signals using generative AI and language models.
  • Optimize models for efficiency and latency on constrained compute platforms.
  • Collaborate with Research Scientists and Engineers across RL-R.
  • Engage in continuous learning and exploration of new technologies.

Qualifications:
  • Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field.
  • PhD in Computer Science, Human-Computer Interaction, or a related field, plus 2+ years of experience.
  • Proven track record of solving complex challenges with multimodal ML, as demonstrated through grants, fellowships, patents, or publications at conferences such as CVPR, NeurIPS, CHI, or equivalent venues.
  • 3+ years of experience with Python.
  • Experience with a common machine learning framework like PyTorch.
  • Experience with ML-based computer vision.
  • Experience with multimodal sensing platforms, data collection, and multimodal signal processing and analysis.
  • Experience converting raw sensor streams into robust models that solve complex tasks.
  • First- or lead-author publications at peer-reviewed conferences in computer vision, machine learning, or computer graphics.
  • Experience with biosignals, behavioral signals, or egocentric data from wearable sensors.
  • Experience with Multimodal Deep Learning approaches and research.
  • Experience with Large Language Models.
  • Experience with C++.
  • Experience with non-ML computer vision, especially OpenCV.
  • Experience with 3D engines such as Unreal or Unity.
  • Experience working in Augmented Reality/Virtual Reality.
  • Experience developing software in a research environment.