At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we’ve built a world-class team in Automated Driving, Energy & Materials, Human-Centered AI, Human Interactive Driving, Large Behavior Models, and Robotics.

The Mission
To conduct cutting-edge research that enables general-purpose robots to be reliably deployed at scale in human environments.

The Challenge
We envision a future where robots assist with household chores and cooking, aid the elderly in maintaining their independence, and free people to spend more time on the activities they enjoy most. To achieve this, robots must operate reliably in messy, unstructured environments. Recent years have seen a surge in the use of foundation models across application domains, particularly in robotics. These “large behavior models” (LBMs) are enhancing the ability of autonomous robots to perform complex tasks in open, interactive environments. TRI Robotics is at the forefront of this emerging field, applying insights from foundation models such as large-scale pre-training and generative deep learning. However, ensuring the reliability of LBMs for large-scale deployment in diverse operating conditions remains a challenge.

The Team
We aim to make progress on some of the hardest scientific challenges around the safe and effective development and use of machine learning algorithms in robotics. To this end, the research mission of the Trustworthy Learning under Uncertainty (TLU) team within the Robotics division is to enable the robust, reliable, and adaptive deployment of LBMs at scale in human environments. To guarantee dependable deployment at scale in the years to come, we are dedicated to enhancing the trustworthiness of LBMs through three key principles: (i) objectively assessing policy performance (Rigorous Evaluation), (ii) detecting and handling unknown situations and returning to nominal performance (Failure Detection and Mitigation), and (iii) identifying and adapting to new information (Active / Continual Learning). Our team has deep cross-functional expertise across controls, uncertainty-aware ML, statistics, and robotics. We measure our success by algorithmic advances to the state of the art and by publication of these results in high-impact journals and conferences. We also value contributions of reproducible and usable open-source software.

The Opportunity
We’re looking for a driven research scientist or research engineer with a strong background in embodied machine learning and a “make it happen” mentality. Specifically, we are looking for expertise in areas such as Policy Evaluation, Failure Detection and Mitigation, and Active Learning in the context of Large Behavior Models (LBMs) for robotic manipulation. Our topics of interest include, but are not limited to: Multi-Modal Foundation Models, Generative Modeling, Imitation Learning, Reinforcement Learning, Planning & Control, Statistics, Uncertainty Estimation, Out-of-Distribution Detection, Safety-Aware & Robust ML, (Inter)Active Learning, and Online / Continual Learning. The ideal candidate is able to conduct research independently, but also works well as part of a larger research team at the cutting edge of robotics and machine learning. Experience with robots is preferred, particularly in the manipulation domain. If our mission of robust, reliable, and adaptive deployment of LBMs at scale in human environments resonates with you, reach out by submitting an application!
Job Type
Full-time
Career Level
Mid Level
Education Level
Ph.D. or professional degree
Number of Employees
101-250 employees