About The Position

1. Conduct research and development of Omni multimodal large models, including the design and construction of training data, foundational model algorithm design, optimization related to pre-training/SFT/RL, model capability evaluation, and exploration of downstream application scenarios.
2. Scientifically analyze challenges in R&D, identify bottlenecks in model performance, and devise solutions based on first principles to accelerate model development and iteration, ensuring competitiveness and leading-edge performance.
3. Explore diverse paradigms for achieving Omni-modal understanding and generation capabilities, research next-generation model architectures, and push the boundaries of multimodal models.

Requirements

  • Bachelor’s degree or higher in Computer Science, Artificial Intelligence, Mathematics, or a related field (full-time program preferred); a graduate degree is preferred.
  • Hands-on experience in large-scale multimodal data processing and high-quality data generation is highly preferred.
  • Solid foundation in deep learning algorithms and practical experience in large model development; familiarity with Diffusion Models and Autoregressive Models is advantageous.
  • Publications in top-tier conferences or experience in cross-modal (e.g., audio-visual) research are preferred.
  • Proficiency in underlying implementation details of deep learning networks and operators, model tuning for training/inference, CPU/GPU acceleration, and distributed training/inference optimization; practical experience is a plus.
  • Strong performance in programming competitions such as ACM-ICPC or NOI is highly valued.
  • Strong learning agility, communication skills, teamwork, and curiosity.


Benefits

  • This position will be eligible for 1 hour of paid sick leave for every 30 hours worked and up to 13 paid holidays throughout the calendar year.
  • Subject to the terms and conditions of the applicable plans then in effect, full-time interns are also eligible to enroll in the Company-sponsored medical plan.
© 2024 Teal Labs, Inc