About The Position

FieldAI is transforming how robots interact with the real world. We build risk-aware, reliable, field-ready AI systems that tackle the hardest problems in robotics and unlock the potential of embodied intelligence. We take a pragmatic approach that goes beyond off-the-shelf, purely data-driven methods or transformer-only architectures, combining cutting-edge research with real-world deployment. Our solutions are already deployed globally, and we continuously improve model performance through rapid iteration driven by real field use.

In Pittsburgh, we're pushing the frontier of embodied intelligence by designing robot learning systems that scale across tasks, environments, and robot embodiments. We work on robotics foundation models, from vision and language to control, and we deploy what we build on real robots solving real problems in unstructured, real-world settings.

We are looking for an AI Research Engineer to advance robot learning and robotics foundation models at FieldAI. In this role, you will focus on developing learning-based methods that enable robots to acquire new skills and generalize across tasks, environments, and embodiments. Your work will span representation learning, reinforcement and imitation learning, and large-scale training of foundation models for robotics.

This role is ideal for someone with a strong research mindset who enjoys working close to real robotic systems. You will collaborate with research scientists and engineers to translate learning research into deployed robot capabilities, directly impacting how robots operate in complex, unstructured real-world environments.

Requirements

  • MS, PhD, or equivalent industry experience in Robotics, AI/ML, Computer Science, or a related field.
  • Strong background in robot learning, reinforcement learning, imitation learning, or representation learning.
  • Experience with PyTorch and modern ML development workflows.
  • Hands-on experience working with real robotic systems.
  • Solid understanding of machine learning fundamentals and experimental methodology.
  • Ability to translate research ideas into practical, field-deployable solutions.
  • Strong collaboration and communication skills in interdisciplinary teams.

Nice To Haves

  • Publications in robotics or AI venues (CoRL, ICRA, IROS, NeurIPS, ICML, CVPR).
  • Experience working with vision-language or multimodal models in robotics.
  • Familiarity with ROS or ROS 2.
  • Background in sim-to-real transfer or data-centric robotics.
  • Contributions to open-source robotics or ML projects.

Responsibilities

  • Develop Robot Learning Algorithms
      • Design and train learning-based methods for skill acquisition and generalization.
      • Apply reinforcement learning, imitation learning, and representation learning to real robotic systems.
      • Integrate learned policies into autonomy stacks used on physical robots.
  • Advance Robotics Foundation Models
      • Leverage and adapt vision-language and multimodal models for robotics applications.
      • Explore pretraining, fine-tuning, and data-scaling strategies for embodied intelligence.
      • Contribute to embodiment-agnostic learning approaches that transfer across platforms.
  • Large-Scale Training and Evaluation
      • Build and run training pipelines using PyTorch and modern ML tooling.
      • Conduct experiments at scale and analyze performance across tasks and environments.
      • Support sim-to-real transfer and real-world validation of learned behaviors.
  • Deploy and Test on Real Robots
      • Collaborate with robotics engineers to deploy models on real hardware.
      • Validate learning-based approaches in unstructured, high-variance environments.
      • Iterate quickly based on field performance and collected data.
  • Research and Knowledge Sharing
      • Contribute to internal research discussions and technical reviews.
      • Where appropriate, contribute to publications, preprints, or open-source projects.