Machine Learning Research Engineer

Etched · Cupertino, CA
Onsite

About The Position

Etched is building AI chips that are hard-coded for individual model architectures. Our first product, Sohu, supports only transformers, but delivers an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, such as real-time video generation models and extremely deep, parallel chain-of-thought reasoning agents. Etched Labs is the organization within Etched whose mission is to democratize generative AI, pushing the boundaries of what will be possible in a post-Sohu world.

Requirements

  • An ML research background with an interest in HW co-design
  • Experience with Python, PyTorch, and/or JAX
  • Familiarity with transformer model architectures, inference serving stacks (vLLM, SGLang, etc.), or distributed inference/training environments
  • Experience working cross-functionally in diverse software and hardware organizations

Nice To Haves

  • ML Systems Research and HW Co-design backgrounds
  • Published inference-time compute research and/or efficient ML research
  • Experience with Rust
  • Familiarity with GPU kernels, the CUDA compilation stack and related tools, or other hardware accelerators

Responsibilities

  • Propose and conduct novel research to achieve results on Sohu that are unviable on GPUs
  • Translate core mathematical operations from the most popular Transformer-based models into maximally performant instruction sequences for Sohu
  • Develop deep architectural knowledge, in collaboration with HW architects and designers, to inform best-in-class software performance on Sohu hardware
  • Co-design and finetune emerging model architectures for highest efficiency on Sohu
  • Guide and contribute to the Sohu software stack, performance characterization tools, and runtime abstractions by implementing frontier models in Python and Rust
  • Propose and implement novel test-time compute algorithms that leverage Sohu’s unique capabilities to unlock products that could never be built on a typical GPU
  • Implement diffusion models on Sohu to achieve GPU-impossible latencies that allow for real-time image generation
  • Tune model instruction sequences and scheduling algorithms for utilization, latency, throughput, or a mix of these metrics
  • Implement model-specific inference-time acceleration techniques (speculative decoding, tree search, KV cache sharing, priority scheduling, etc.) by interacting with the rest of the inference serving stack
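To give a concrete flavor of the speculative-decoding work mentioned above, here is a minimal greedy sketch of the idea: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them, accepting the longest agreeing prefix. Note that `draft_next` and `target_next` are hypothetical deterministic stand-ins for illustration only, not Etched or Sohu APIs.

```python
# Toy sketch of greedy speculative decoding. A fast "draft" model proposes
# k tokens autoregressively; the slow "target" model checks them and the
# longest matching prefix is accepted, plus one corrected token on mismatch.
# Both models here are deterministic toy functions, NOT real model APIs.

VOCAB = 101

def draft_next(ctx):
    # Hypothetical fast-but-approximate model: always adds 3.
    return (ctx[-1] + 3) % VOCAB

def target_next(ctx):
    # Hypothetical slow-but-exact model: adds 3, except adds 7
    # whenever the last token is a multiple of 10.
    return (ctx[-1] + 3) % VOCAB if ctx[-1] % 10 else (ctx[-1] + 7) % VOCAB

def speculative_decode(prompt, n_tokens, k=4):
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1) Draft proposes k tokens ahead.
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Target verifies each position (one batched pass on real HW).
        accepted, ctx = [], list(out)
        for t in proposal:
            expected = target_next(ctx)
            if t != expected:
                accepted.append(expected)  # take target's token, stop early
                break
            accepted.append(t)
            ctx.append(t)
        out.extend(accepted)
    return out[len(prompt):][:n_tokens]
```

Because draft tokens are only accepted when they match the target's greedy choice, the output is identical to decoding with the target model alone; the win is that verification of k positions can be batched into one target pass.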

Benefits

  • Full medical, dental, and vision packages, with 100% of premium covered
  • Housing subsidy of $2,000/month for those living within walking distance of the office
  • Daily lunch and dinner in our office
  • Relocation support for those moving to Cupertino