About The Position

We are building large-scale, native multimodal model systems that jointly support vision, audio, and text to enable comprehensive perception and understanding of the physical world. You will join the core research team focused on speech and audio, contributing to the key research areas detailed under Responsibilities below.

Requirements

  • Ph.D. in Computer Science, Electrical Engineering, Artificial Intelligence, Linguistics, or a related field; or Master's degree with several years of relevant experience.
  • Solid understanding of speech and audio signal processing, acoustic modeling, language modeling, and large model architectures.
  • Proficient in one or more core speech system development pipelines such as ASR, TTS, or speech translation; experience with multilingual, multitask, or end-to-end systems is a plus.
  • Candidates with in-depth research or practical experience in the following areas are strongly preferred (a minimal representation-extraction sketch follows this list):
      • Speech representation pretraining (e.g., HuBERT, Wav2Vec, Whisper).
      • Multimodal alignment and cross-modal modeling (e.g., audio-visual-text).
      • Driving state-of-the-art (SOTA) performance on audio understanding tasks with large models.
  • Proficient in deep learning frameworks such as PyTorch or TensorFlow; experience with large-scale training and distributed systems is a plus.
  • Familiar with Transformer-based architectures and their applications in speech and multimodal training/inference.
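
For candidates newer to this area, the following is a minimal, hypothetical sketch of the representation-extraction work referenced above, using a pretrained Wav2Vec2 encoder from torchaudio. The audio path is a placeholder, and the snippet illustrates the research area rather than any internal system.

```python
import torch
import torchaudio

# Pretrained self-supervised speech encoder (Wav2Vec2, base configuration).
bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model().eval()

waveform, sr = torchaudio.load("sample.wav")   # "sample.wav" is a placeholder
waveform = waveform.mean(dim=0, keepdim=True)  # downmix to mono: (1, time)
if sr != bundle.sample_rate:
    # Match the 16 kHz rate the pretrained encoder was trained on.
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    # One (batch, frames, dim) tensor per transformer layer; downstream
    # tasks (ASR, translation, paralinguistics) select or pool layers.
    features, _ = model.extract_features(waveform)

print(len(features), features[-1].shape)
```

Which layers to select and how to pool them are task-dependent design choices in multi-task representation work of this kind.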

Responsibilities

  • Develop general-purpose, end-to-end large speech models covering multilingual automatic speech recognition (ASR), speech translation, speech synthesis, paralinguistic understanding, and general audio understanding.
  • Advance research on speech representation learning and encoder/decoder architectures to build unified acoustic representations for multi-task and multimodal applications.
  • Explore representation alignment and fusion mechanisms between audio/speech and other modalities in large multimodal models, enabling joint modeling with image and text (see the adapter sketch after this list).
  • Build and maintain high-quality multimodal speech datasets, including automatic annotation and data synthesis technologies.
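
As an illustration of the fusion work above, here is a hypothetical sketch of one widely used pattern: a learned adapter projects speech-encoder features into a text model's embedding space so audio can be consumed as soft tokens alongside text. The module name, dimensions, and stride are illustrative assumptions, not a description of our architecture.

```python
import torch
import torch.nn as nn

class AudioToTextAdapter(nn.Module):
    """Illustrative adapter: speech-encoder frames -> text-embedding-sized tokens."""

    def __init__(self, audio_dim: int = 768, text_dim: int = 4096, stride: int = 4):
        super().__init__()
        self.stride = stride  # stack `stride` consecutive frames into one token
        self.proj = nn.Linear(audio_dim * stride, text_dim)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim) from a speech encoder
        b, t, d = audio_feats.shape
        t = (t // self.stride) * self.stride  # drop ragged tail frames
        x = audio_feats[:, :t].reshape(b, t // self.stride, d * self.stride)
        # Projected tokens are typically concatenated with text token
        # embeddings before the language model's transformer stack.
        return self.proj(x)  # (batch, tokens, text_dim)

adapter = AudioToTextAdapter()
audio_tokens = adapter(torch.randn(2, 100, 768))
print(audio_tokens.shape)  # torch.Size([2, 25, 4096])
```

The frame-stacking stride trades temporal resolution for a shorter token sequence, a common efficiency lever when feeding audio into a text model.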

Benefits

  • Eligible for a sign-on payment, relocation package, and restricted stock units, evaluated on a case-by-case basis.
  • Eligible for medical, dental, vision, life and disability benefits.
  • Participation in the Company's 401(k) plan.
  • Eligible for 15 to 25 days of vacation per year (depending on tenure).
  • Eligible for up to 13 days of holidays throughout the calendar year.
  • Eligible for up to 10 days of paid sick leave per year.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Industry: Broadcasting and Content Providers
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees
