About The Position

  • Conduct research and development of Omni multimodal large models, including the design and construction of training data, foundational model algorithm design, optimization related to pre-training/SFT/RL, model capability evaluation, and exploration of downstream application scenarios.
  • Scientifically analyze challenges in R&D, identify bottlenecks in model performance, and devise solutions based on first principles to accelerate model development and iteration, ensuring competitiveness and leading-edge performance.
  • Explore diverse paradigms for achieving Omni-modal understanding and generation capabilities, research next-generation model architectures, and push the boundaries of multimodal models.

Requirements

  • Bachelor's degree or higher in Computer Science, Artificial Intelligence, Mathematics, or a related field (full-time programs preferred); graduate degrees are prioritized.
  • Hands-on experience in large-scale multimodal data processing and high-quality data generation is highly preferred.
  • Solid foundation in deep learning algorithms and practical experience in large model development; familiarity with Diffusion Models and Autoregressive Models is advantageous.
  • Proficiency in the low-level implementation of deep learning networks and operators, model tuning for training and inference, CPU/GPU acceleration, and distributed training/inference optimization; hands-on experience in these areas is a plus.
  • Strong learning agility, communication skills, teamwork, and curiosity.

Nice To Haves

  • Publications at top-tier conferences or experience in cross-modal (e.g., audio-visual) research is preferred.
  • Participation in ACM or NOI competitions is highly valued.

Benefits

  • Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, each evaluated on a case-by-case basis.
  • Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company's 401(k) plan.
  • Employees are also eligible for 15 to 25 days of vacation per year (depending on tenure), up to 13 paid holidays per calendar year, and up to 10 days of paid sick leave per year.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Industry: Broadcasting and Content Providers
  • Number of Employees: 5,001-10,000
