Research Engineer/Research Scientist, Audio

Anthropic · San Francisco, CA
Hybrid

About The Position

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Anthropic’s Audio team pushes the boundaries of what's possible with audio in large language models. We care about making safe, steerable, reliable systems that can understand and generate speech and audio, prioritizing not only naturalness but also steerability and robustness. As a researcher on the Audio team, you'll work across the full stack of audio ML: developing audio codecs and representations, sourcing and synthesizing high-quality audio data, training large-scale speech language models and large audio diffusion models, and developing novel architectures for incorporating continuous signals into LLMs. Our team focuses primarily, but not exclusively, on speech, building advanced steerable systems spanning end-to-end conversational systems, speech and audio understanding models, and speech synthesis capabilities. The team works closely with collaborators across pretraining, finetuning, reinforcement learning, production inference, and product to take advanced audio technologies from early research to high-impact real-world deployments.

Requirements

  • Have hands-on experience with training audio models, whether that's conversational speech-to-speech, speech translation, speech recognition, text-to-speech, diarization, codecs, or generative audio models
  • Genuinely enjoy both research and engineering work, and you'd describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other
  • Are comfortable working across abstraction levels, from signal processing fundamentals to large-scale model training and inference optimization
  • Have deep expertise with JAX, PyTorch, or large-scale distributed training, and can debug performance issues across the full stack
  • Thrive in fast-moving environments where the most important problem might shift as we learn more about what works
  • Communicate clearly and collaborate effectively; audio touches many parts of our systems, so you'll work closely with teams across the company
  • Are passionate about building conversational AI that feels natural, steerable, and safe
  • Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly
  • Hold at least a Bachelor's degree in a related field, or have equivalent experience

Nice To Haves

  • Large language model pretraining and finetuning
  • Training diffusion models for image and audio generation
  • Reinforcement learning for large language models and diffusion models
  • End-to-end system optimization, from performance benchmarking to kernel optimization
  • GPUs, Kubernetes, PyTorch, or distributed training infrastructure

Benefits

  • Competitive compensation and benefits
  • Optional equity donation matching
  • Generous vacation and parental leave
  • Flexible working hours
  • A lovely office space in which to collaborate with colleagues