[ART Lab] Founding Head of AI

Cox Exponential
San Mateo, CA

About The Position

We're looking for our Founding Head of AI to architect and build the intelligence behind our product. You'll own the AI/ML stack from perception to interaction, working at the intersection of computer vision, 3D understanding, large language models, and embodied AI. This is a rare opportunity to define the AI architecture for a new product category and build foundational models from the ground up. You'll be hands-on from day one, writing code, training models, and shipping features, while also establishing the technical vision and team culture that will scale with us. As we grow, you'll transition into leading the AI team while remaining deeply technical.

What You'll Build

  • Robotic Perception Systems: Develop real-time perception pipelines using our camera and time-of-flight sensors for scene understanding, object detection, tracking, and spatial mapping.
  • Vision-Language-Interaction Models (VLIs): Pioneer new architectures that integrate vision, language, and physical interaction, going beyond VLMs and VLAs to understand and respond to 3D environments and people.
  • AI Agents for the Home: Build agentic systems that understand context, anticipate needs, and interact naturally.
  • 3D Rendering: Create intelligent systems that understand room geometry and surface properties for optimal visual experiences.
  • LLM/VLM Integration: Design and optimize language model architectures for on-device and hybrid inference, balancing capability with latency and privacy.

Requirements

  • Deep expertise in computer vision and robotic perception (SLAM, 3D reconstruction, sensor fusion)
  • Strong foundation in modern ML: transformers, attention mechanisms, VLMs, and LLMs
  • Experience training and deploying models in resource-constrained or real-time environments
  • Proficiency in Python and deep learning frameworks, plus some experience with ROS
  • Track record of shipping ML products or research that bridges vision and language
  • Comfort operating from 0→1: ambiguity energizes you rather than paralyzes you

Nice To Haves

  • Experience in embodied AI, robotics research, or human-robot interaction
  • Background in 3D graphics or rendering
  • Experience with edge AI, model optimization, or on-device inference
  • Publications in top-tier ML/robotics venues (CVPR, ICCV, NeurIPS, CoRL, ICRA)
  • Previous founding team or early startup experience
