AI Researcher (Audio/Voice)

Amplifier Health · San Francisco, CA · Onsite

About The Position

We are Amplifier, and we have built the world's first Large Acoustic Model (LAM): a foundation model that uses the human voice to detect health conditions. This is sci-fi becoming reality. We have raised significant capital from top-tier investors to turn this technology into a massive new category in healthcare. We are looking for a researcher who is tired of the "publish or perish" cycle and wants to build intelligence that actually works in the real world.

Let's be clear about what you are signing up for:

We are entering a phase of hyper-growth. We are pushing ourselves, and this technology, further than most would consider reasonable, because we believe the outcome (saving lives at scale) is worth the intensity required to get there.

We work in person in San Francisco. We believe the hardest problems are solved at a whiteboard, not over a Zoom call. We want the energy, the speed, and the camaraderie that come from being in the arena together.

We move fast. We don't spend months on theoretical proofs. We hypothesize, we code, we train, and we validate. The feedback loop is immediate.

We have fun. We are a small, tight-knit crew on an adventure. We work hard because we love the game, not because we have to.

You will join our elite AI Research team to advance the state of the art in acoustic modeling. You won't just be fine-tuning off-the-shelf models; you will be designing novel architectures that can extract clinical-grade biomarkers from raw audio waveforms. Your work will span:

  • Novel Architectures: Voice is not text. You will push the boundaries of how Transformer architectures process long-range acoustic dependencies and non-verbal signals.
  • Biomarker Discovery: You will design experiments to isolate specific acoustic features (jitter, shimmer, respiratory rate) that correlate with health conditions, often discovering signals that medical science hasn't yet quantified.
  • Data Efficiency: We are building a foundation model. You will work on self-supervised learning techniques to leverage massive amounts of unlabeled audio data.

Requirements

  • You have a deep theoretical understanding of deep learning.
  • You express your ideas fluently in PyTorch.
  • You understand the physics of sound.
  • You know your way around DSP (digital signal processing): STFTs, Mel-spectrograms, and the unique challenges of modeling raw audio.
  • You understand the math behind the attention mechanism and can modify it when standard approaches fail.
  • You want your work to result in a product used by millions, not just a citation in a journal.
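To give a flavor of the DSP toolkit above, here is a minimal NumPy sketch of the classic STFT → log-Mel-spectrogram pipeline. The window length, hop size, and filter count are illustrative defaults, not Amplifier's actual feature front end:

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Short-time Fourier transform: Hann-windowed frames -> real FFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)            # (n_frames, n_fft//2 + 1)

def mel_filterbank(sr, n_fft, n_mels=40):
    """Triangular filters spaced evenly on the Mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fb[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    return fb

def mel_spectrogram(x, sr=16000, n_fft=512, hop=128, n_mels=40):
    power = np.abs(stft(x, n_fft, hop)) ** 2      # power spectrogram
    mels = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mels + 1e-10)                   # log-Mel features

# One second of a 440 Hz tone at 16 kHz
t = np.linspace(0, 1, 16000, endpoint=False)
features = mel_spectrogram(np.sin(2 * np.pi * 440 * t))
print(features.shape)                             # (n_frames, n_mels)
```

Clinical work starts where this textbook pipeline ends; features like jitter and shimmer are measured on the raw waveform, not on the Mel representation.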

Responsibilities

  • Advance the state of the art in acoustic modeling
  • Design novel architectures that can extract clinical-grade biomarkers from raw audio waveforms
  • Push the boundaries of how Transformer architectures process long-range acoustic dependencies and non-verbal signals
  • Design experiments to isolate specific acoustic features (jitter, shimmer, respiratory rate) that correlate with health conditions, often discovering signals that medical science hasn't yet quantified
  • Work on self-supervised learning techniques to leverage massive amounts of unlabeled audio data

Benefits

  • Impact: The chance to build a product that literally saves lives.
  • Equity: Real ownership. We are early enough that your equity package has life-changing potential.
  • The Team: You will work directly with the Founders (Jeremy, Amit, Peh) and our AI research team. No middle management. Just builders.
  • Resources: We are well-capitalized (oversized Seed), giving us the compute resources (H100 clusters) we need to execute.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 1-10 employees
