About The Position

The Seed Multimodal Interaction and World Model team develops models with human-level multimodal understanding and interaction capabilities. The team also aims to advance the exploration and development of multimodal assistant products.

Responsibilities

  • Develop and evaluate unified modeling architectures for multimodal foundation models across vision, audio, and language
  • Contribute to building a shared representation space that supports both generation and understanding tasks
  • Explore architectural and optimization strategies to improve generalization across modalities and tasks
  • Collaborate with researchers working on generation, reasoning, and world modeling to scale and adapt models for real-world scenarios