Toyota Research Institute
$45 - $65/hr
Intern
Hybrid • Los Altos, CA
101-250 employees
Professional, Scientific, and Technical Services

At Toyota Research Institute (TRI), we're on a mission to improve the quality of human life. We're developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we've built a world-class team in Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavior Models, and Robotics. This is a paid, 12-week internship opportunity for summer 2025. Please note that this internship will be a hybrid in-office role.

The Mission

We are working to create general-purpose robots capable of accomplishing a wide variety of dexterous tasks. To do this, our team is building general-purpose machine learning foundation models for dexterous robot manipulation. These models, which we call Large Behavior Models (LBMs), use generative AI techniques to produce robot actions from sensor data and human requests (a toy illustrative sketch of this idea appears after the team overview below). To accomplish this, we are creating a large curriculum of embodied robot demonstration data and combining that data with a rich corpus of internet-scale text, image, and video data. We are also using high-quality simulation to augment real-world robot data with procedurally generated synthetic demonstrations.

The Team

The Large Behavior Models team's charter is to push the frontiers of research in robotics and machine learning to develop the future capabilities required for general-purpose robots able to operate in unstructured environments such as homes! Our Computer Vision team is looking for Research Interns with experience in areas such as Generalizable Representation Learning, Video Understanding, Spatio-Temporal World Models, Unsupervised Object Discovery, Differentiable Rendering, Neural Implicit Representations, Generative Models, and Self-Supervised Learning. We aim to push the boundaries of scene reconstruction methods to enable the safe and effective use of large robotic fleets, simulation, and prior knowledge (geometry, physics, shown experience, behavioral science), not only for automation but also for human augmentation, working toward Principle-Centric Artificial Intelligence (AI) for Embodied Foundation Models in the context of Large Behavior Models (LBMs).
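
For illustration only: the sketch below is a toy, minimal version of the "sensor data + human request → action" idea described above, written in PyTorch. All module names, layer sizes, and the simple behavior-cloning loss are assumptions made for this sketch, not TRI's actual LBM architecture.

```python
# Toy sketch (not TRI's LBM): a language-conditioned policy that maps a camera
# image and a precomputed text-command embedding to a robot action, trained
# with a simple behavior-cloning loss against demonstration actions.
import torch
import torch.nn as nn

class ToyLanguageConditionedPolicy(nn.Module):
    def __init__(self, img_feat_dim=512, text_feat_dim=384, action_dim=7):
        super().__init__()
        # Small CNN encoder for the camera observation.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, img_feat_dim),
        )
        # MLP head that fuses vision features with the text embedding
        # (e.g. from any off-the-shelf sentence encoder) and predicts an action.
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + text_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, image, text_embedding):
        z = self.vision(image)
        return self.head(torch.cat([z, text_embedding], dim=-1))

# Usage on random tensors standing in for demonstration data.
policy = ToyLanguageConditionedPolicy()
image = torch.randn(8, 3, 96, 96)   # batch of camera frames
text = torch.randn(8, 384)          # batch of command embeddings
demo_actions = torch.randn(8, 7)    # actions from demonstrations
loss = nn.functional.mse_loss(policy(image, text), demo_actions)
loss.backward()
```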

Responsibilities

  • Conduct ambitious Machine Learning research that solves open problems of high practical value, and validate it on real-world benchmarks and systems.
  • Push the boundaries of knowledge and the state of the art in ML areas including simulation, perception, prediction, and planning for autonomous driving and robotics.
  • Partner with a multidisciplinary team including other research scientists and engineers across the ML team, TRI, Toyota, and our university partners.
  • Stay up to date on the state-of-the-art in Machine Learning ideas and software.
  • Present results in verbal and written communications, internally, at top international venues, and via open source contributions to the community.
  • Participate in collaborations with our external research partners (e.g., Stanford, Berkeley, MIT, CMU, UMich).
Qualifications

  • Currently pursuing a Ph.D. in Machine Learning, Robotics, Computer Vision, or related fields.
  • Experienced in at least one key ML area, such as Computer Vision, ML theory, AI ethics, or a related field.
  • At least one publication at high-impact conferences/journals (CVPR, ICLR, NeurIPS, CoRL, ICML, RSS, ICRA, ICCV, ECCV, PAMI, IJCV, etc.) on the aforementioned topics or other evidence of pioneering research work (e.g., via open source contributions).
  • Some experience with scientific Python, Unix, and a common deep learning framework (preferably PyTorch).
  • Ability to work in collaboration with other researchers and engineers to invent and develop interesting research ideas.
  • Passionate about large scale challenges in ML grounded in physical systems, especially in the space of robotics.
  • Reliable team-player with a big-picture mindset and depth, caring about openness and delivering with integrity.
Benefits

  • Generous benefits package, including vacation and sick time.