About The Position

This is a unique opportunity to join a full-stack robotics R&D group where your work will influence materials, sensors, actuators, low-level control, and foundation models for robot learning. The ideal Research Scientist candidate will draw on practical expertise in robot foundation models, tactile sensing, multimodal learning, and robotics systems to innovate across the full stack. This role requires a systems-oriented researcher who can drive technical direction, define a strategy for scaling tactile data, and shape how foundation models are built for robotics. You will work on an applied research team and collaborate with robotics researchers, ML engineers, and a wide range of scientists across the organization.

Requirements

  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • PhD in Artificial Intelligence, Robotics, Machine Learning, or a related field
  • Experience with tactile sensing, multimodal learning, or sensor fusion for robotics
  • Experience with vision-language models (VLMs), vision-language-action models (VLAs), or action-conditioned world models
  • Experience in robot learning, reinforcement learning, imitation learning, or sim-to-real transfer
  • Experience working and communicating cross-functionally in a team environment

Nice To Haves

  • Track record of impactful work on dexterous manipulation demonstrated through publications, patents, or deployed systems
  • 3+ years of industry experience in robot learning, reinforcement learning, world models, or embodied AI
  • Hands-on experience with tactile sensors for robotic manipulation
  • Demonstrated experience driving technical direction and influencing cross-functional teams
  • Experience building and scaling data pipelines for robotics or multimodal ML systems
  • Systems engineering skills with a track record of building production-quality research infrastructure
  • Experience working with simulation frameworks (MuJoCo, Isaac) and real hardware
  • Experience with deep learning frameworks (PyTorch, TensorFlow) and Python

Responsibilities

  • Drive technical direction for leveraging, adapting, and fine-tuning VLAs or other robotic foundation models within the team
  • Define and execute our scaled tactile/multimodal data strategy, including collection, processing, and integration pipelines
  • Develop strategies for training and fine-tuning foundation models to effectively leverage tactile sensing data alongside vision, language, and other modalities
  • Plan and execute research aligned with long-term organizational objectives while identifying actionable intermediate milestones
  • Mentor other researchers and help grow the team's expertise
  • Represent the team externally through publications, conference presentations, and partnerships

Benefits

  • Bonus
  • Equity
  • Benefits
© 2024 Teal Labs, Inc