AI Researcher

Archetype AI · San Mateo, CA

About The Position

Archetype AI is developing the world's first AI platform for the physical world: a foundation model, a real-time multimodal LLM for real life, that transforms real-world data into insights and knowledge people can interact with naturally. Because it understands the real-time physical environment and everything that happens in it, it helps people in their real lives, not just online. Formed by an exceptionally high-caliber team from Google and backed by deep tech venture funds in Silicon Valley, Archetype AI is currently at the Series A stage and progressing rapidly toward its next stage of technology development.

This is a unique, once-in-a-lifetime opportunity to join an exciting AI team at the beginning of its journey, in the heart of Silicon Valley. Our team is headquartered in San Mateo, California, with members throughout the US and Europe. We are actively growing, so if you are an exceptional candidate excited to work on the cutting edge of physical AI and don't see a role below that fits you exactly, contact us directly with your resume at jobs@archetypeai.io.

We are building the next generation of foundation models for real-world sensor data. Our goal is to develop AI systems that learn rich representations of complex environments from modalities such as RF signals, video, and other physical sensors, enabling new capabilities in physical-world understanding and reasoning.

We are looking for an AI Researcher to design and train large-scale models that learn directly from raw sensor streams. This role combines deep learning research, large-scale experimentation, and hands-on system building, with the opportunity to shape core technology used across multiple real-world applications. You will work on problems at the intersection of representation learning, multimodal modeling, and physical-world sensing.

Requirements

  • 8+ years of experience developing advanced ML/AI systems, with a focus on real-world sensor data.
  • Strong expertise in modern deep learning architectures, especially transformers, representation learning, and large-scale model training.
  • Strong research and experimentation skills, including designing and evaluating new approaches.
  • Excellent programming skills in Python, with deep learning frameworks such as PyTorch.
  • Ability to move quickly from idea → experiment → working prototype.
  • Comfortable working in a fast-paced, multidisciplinary environment with a distributed team.
  • Excellent written and verbal communication skills.

Nice To Haves

  • Experience working with RF signals, e.g., radar or wireless physical-layer data.
  • Familiarity with multimodal learning across vision, RF, audio, or other sensors.
  • Experience with self-supervised or contrastive learning for large unlabeled datasets.
  • Experience with large-scale training infrastructure or distributed training frameworks.

Responsibilities

  • Design and train foundation models for sensor data, including multimodal architectures combining RF, vision, and other sensing modalities.
  • Develop new approaches for representation learning from raw sensor streams.
  • Fine-tune and adapt pretrained models for specific datasets and customer use cases, understanding and navigating trade-offs between data availability, model capacity, and performance.
  • Identify model limitations, diagnose failure modes, and design experiments that drive measurable improvements.
  • Collaborate closely with engineers and product teams to translate research advances into production systems.