About The Position

We are building large-scale, native multimodal model systems that jointly support vision, audio, and text to enable comprehensive perception and understanding of the physical world. You will join the core research team focused on speech and audio, contributing to key research areas spanning end-to-end large speech models, speech representation learning, cross-modal alignment, and multimodal speech data.

Requirements

  • Ph.D. in Computer Science, Electrical Engineering, Artificial Intelligence, Linguistics, or a related field; or Master's degree with several years of relevant experience.
  • Solid understanding of speech and audio signal processing, acoustic modeling, language modeling, and large model architectures.
  • Proficient in one or more core speech system development pipelines such as ASR, TTS, or speech translation; experience with multilingual, multitask, or end-to-end systems is a plus.
  • Proficient in deep learning frameworks such as PyTorch or TensorFlow; experience with large-scale training and distributed systems is a plus.
  • Familiar with Transformer-based architectures and their applications in speech and multimodal training/inference.

Nice To Haves

  • Speech representation pretraining (e.g., HuBERT, Wav2Vec, Whisper)
  • Multimodal alignment and cross-modal modeling (e.g., audio-visual-text)
  • Experience driving state-of-the-art (SOTA) performance on audio understanding tasks with large models

Responsibilities

  • Develop general-purpose, end-to-end large speech models covering multilingual automatic speech recognition (ASR), speech translation, speech synthesis, paralinguistic understanding, and general audio understanding.
  • Advance research on speech representation learning and encoder/decoder architectures to build unified acoustic representations for multi-task and multimodal applications.
  • Explore representation alignment and fusion mechanisms between audio/speech and other modalities in large multimodal models, enabling joint modeling with image and text.
  • Build and maintain high-quality multimodal speech datasets, including automatic annotation and data synthesis technologies.

Benefits

  • Employees hired for this position may be eligible for a sign-on payment, relocation package, and restricted stock units, which will be evaluated on a case-by-case basis.
  • Subject to the terms and conditions of the plans in effect, hired applicants are also eligible for medical, dental, vision, life and disability benefits, and participation in the Company's 401(k) plan.
  • Employees are also eligible for 15 to 25 days of vacation per year (depending on tenure), up to 13 paid holidays per calendar year, and up to 10 days of paid sick leave per year.
  • Your benefits may be adjusted to reflect your location, employment status, duration of employment with the company, and position level.
  • Benefits may also be pro-rated for those who start working during the calendar year.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Broadcasting and Content Providers
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 5,001-10,000 employees