Tech Lead, Robotic AI Model

Faraday Future
El Segundo, CA

About The Position

Faraday Future is a California-based technology company focused on the design, engineering, and development of intelligent, connected electric vehicles and related artificial intelligence–enabled technologies. Founded in 2014, the Company’s mission is to disrupt the automotive and technology industries by creating user-centric, technology-first experiences. Together with its controlled subsidiaries, the Company operates across multiple technology-driven areas, including AI electric vehicles, robotics, and its crypto business (AIXC), all under its upgraded Global EAI Industry Bridge Strategy, marking the beginning of a new chapter in AI mobility and Web3 integration. The Company aims to leverage the latest technologies and the world’s best talent to realize exciting new possibilities across all of these lines. Faraday Future’s automotive business exemplifies its vision for luxury, innovation, and performance, while its FX strategy aims to introduce mass-production models equipped with state-of-the-art luxury technology derived from the FF brand, targeted toward a broader market with middle-to-low price range offerings. FF is committed to redefining mobility through AI innovation. Join us in shaping the future of intelligent transportation and technology by creating something new, something connected, and something with true global impact.

As the Tech Lead for the Robotic AI Model, you will own the critical pipeline that transforms pretrained foundation models into deployable robot policies, turning general-purpose AI into systems that can reliably manipulate objects, navigate environments, and perform complex physical tasks in the real world. This role sits at the intersection of embodied AI, robot learning, and foundation model adaptation.
You will work across the full post-training lifecycle: curating demonstration data, fine-tuning vision-language-action (VLA) models or world models, training reinforcement learning policies in simulation, validating behaviors on real hardware, and optimizing models for on-robot inference. Your work will directly determine how capable, safe, and generalizable our robots are.

Requirements

  • Master’s or PhD in Robotics, Computer Science, Machine Learning, or a closely related field
  • 3+ years of hands-on experience in robot learning, including imitation learning, behavior cloning, or visuomotor policy training on real or simulated robots
  • Deep expertise in at least one post-training paradigm: SFT on robot demonstrations, RL-based policy optimization, or diffusion/flow-matching policy training
  • Strong PyTorch skills with experience training and debugging models at scale; familiarity with distributed training (FSDP, DeepSpeed)
  • Practical experience with robot simulation platforms (Isaac Sim, MuJoCo, PyBullet, or SAPIEN) and sim-to-real workflows
  • Understanding of action representations for robotics: continuous control, discrete tokenization, action chunking, and diffusion-based action generation
  • Solid Python engineering; comfortable working with ROS/ROS2, real-time control systems, and robot hardware integration
  • Ability to independently drive projects from research prototype to real-robot deployment

Nice To Haves

  • Experience fine-tuning VLA models such as π₀, OpenVLA, RT-2, Octo, or similar generalist robot policies
  • Hands-on experience with real robot platforms: humanoids, bi-manual arms (ALOHA), mobile manipulators, or dexterous hands
  • Experience with large-scale teleoperation data collection systems and robot fleet management
  • Familiarity with RLHF/DPO/GRPO applied to robotic policy alignment and human preference learning
  • Experience building or contributing to robot learning infrastructure (LeRobot, robomimic, openpi, etc.)
  • Publications at top robotics or ML venues (CoRL, RSS, ICRA, NeurIPS, ICML, ICLR)
  • Knowledge of on-device model optimization: TensorRT, ONNX Runtime, model pruning, and edge deployment for embodied AI

Responsibilities

  • Design and execute post-training pipelines for VLA and visuomotor policy models (e.g., diffusion policies, ACT, flow matching), including supervised fine-tuning (SFT), reinforcement learning (RL), and preference-based optimization
  • Fine-tune pretrained robot foundation models on task-specific demonstration datasets for dexterous manipulation, locomotion, whole-body control, and multi-step task sequencing
  • Develop and iterate on reward functions, verifiers, and RL training loops (PPO, GRPO, RLVR) to improve policy success rate and robustness in simulation and real-world deployment
  • Apply parameter-efficient fine-tuning methods (LoRA, QLoRA, OFT) to adapt large models to new tasks and robot embodiments under compute constraints
  • Build and manage large-scale robot demonstration data pipelines: teleoperation data collection, action tokenization (e.g., FAST tokenizer), data augmentation, quality filtering, and dataset versioning
  • Define data collection strategies across robot platforms, collaborating with robot operators and data labeling teams to ensure dataset diversity and coverage
  • Integrate multi-modal sensory data (RGB, depth, proprioception, force/torque, tactile) into coherent training datasets
  • Build and maintain simulation environments (Isaac Sim, MuJoCo, SAPIEN) for scalable policy training, including domain randomization, asset generation, and task definition
  • Address sim-to-real transfer challenges through visual augmentation, action space calibration, dynamics randomization, and systematic real-world validation
  • Design and run large-scale distributed RL training across GPU clusters for locomotion and manipulation policies
  • Build evaluation and benchmarking infrastructure: automated success-rate tracking, sim evaluation harnesses, real-robot A/B testing, and regression monitoring
  • Optimize models for on-robot inference: quantization (INT8/FP8), action chunking, latency reduction, and real-time control loop integration
  • Collaborate with controls, perception, and hardware teams to integrate learned policies into the full robot software stack
  • Track and adopt state-of-the-art research in robot foundation models, generalist policies, and embodied AI post-training (e.g., π₀/π₀.5, OpenVLA OFT, RT-2, Octo, Helix)
  • Contribute to internal research efforts on topics such as multi-embodiment transfer, long-horizon task learning, open-world generalization, and human-in-the-loop policy improvement

Benefits

  • Healthcare + dental + vision benefits (free for you; discounted for family)
  • 401(k) options
  • Casual dress code + relaxed work environment
  • Culturally diverse, progressive atmosphere