About The Position

We are committed to building the core architecture for Artificial General Intelligence (AGI) systems that match or surpass human-level capabilities. As a key contributor to our core R&D team, you will help develop large-scale models with multimodal perception, autonomous learning, and reasoning abilities, driving their generalization to real-world applications. Our goal is to design a native multimodal system—capable of understanding and generating across vision, speech, and text—while interacting deeply with the environment to catalyze the transition from AGI to ASI (Artificial Superintelligence).

Requirements

  • Expertise in Transformer-based architectures and their applications in language and multimodal domains.
  • Hands-on experience building or optimizing billion-parameter-scale models; familiarity with training paradigms such as SFT (Supervised Fine-Tuning), RLHF (Reinforcement Learning from Human Feedback), and self-supervised learning.

Nice To Haves

  • Preferred qualifications include a deep understanding of, or practical experience in, one or more of the following areas:
      • Multimodal models (e.g., vision-language models, audio-video models)
      • Reinforcement learning and autonomous agent systems
      • Complex reasoning and planning (e.g., search + LLMs, world modeling)
      • Sparse modeling and dynamic routing mechanisms
  • Strong engineering and system thinking capabilities, with the ability to translate cutting-edge research into production-level AGI model systems.
  • Publications in top-tier conferences/journals such as NeurIPS, ICLR, CVPR, ACL, etc., are highly desirable.

Responsibilities

  • Design unified large model architectures with integrated capabilities in multimodal perception, reasoning, memory, and generation (across vision/audio/text).
  • Build systems that support continual learning, hierarchical memory, autonomous exploration, and self-evolution.
  • Advance the development of agent-based systems with autonomous task planning, cross-modal interaction, tool usage, and self-improvement capabilities.
  • Contribute deeply to the design of core components such as general representation learning, synchronized audio-visual modeling, world models, and sparse modeling.

Benefits

  • Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis.
  • Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company’s 401(k) plan.
  • Employees are also eligible for 15 to 25 days of vacation per year (depending on tenure), up to 13 paid holidays throughout the calendar year, and up to 10 days of paid sick leave per year.
  • Your benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level.
  • Benefits may also be pro-rated for those who start working during the calendar year.
© 2024 Teal Labs, Inc.