Etched is building AI chips that are hard-coded for individual model architectures. Our first product, Sohu, only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep chain-of-thought reasoning.

Software Engineer, LLM Infrastructure

Transformer ASICs, like those built by Etched, dramatically improve time-to-first-token (TTFT) latency. For a large model like Llama-3-70B with 2048 input tokens, TTFT will be in the single-digit milliseconds (we will announce performance figures publicly at our launch). However, single-digit millisecond latency means nothing if the rest of the serving stack takes 100+ ms, or if customers never actually use it (or adopt the optimizations into their own stacks). You will help make both of these happen.

You will work with our software team to build software for continuous batching, and write world-class interactive documentation (like PyTorch's "Run in Colab" feature) to show customers how it works. You will get this software working on our pre-silicon platform, and port it to the physical chips once they are done being fabbed. You will find creative new ways to improve this latency: Can we speculatively decode the user's inputs? Can we preempt sequences if we run out of KV cache space and recompute them later? Can we cache common prefills?
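To make the continuous batching and KV-cache preemption questions above concrete, here is a toy Python sketch of the kind of scheduler involved. None of Etched's actual serving internals are public; every class name and the simple token-count KV accounting here are invented for illustration. Each step, the scheduler admits waiting sequences while the KV budget allows, runs one decode step for the whole batch, preempts the newest sequence back to the queue if the budget is exceeded (to be recomputed later), and retires finished sequences so their KV slots free up immediately.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Sequence:
    seq_id: int
    prompt_len: int
    max_new_tokens: int
    generated: int = 0

    @property
    def kv_len(self) -> int:
        # KV cache slots this sequence occupies: prompt + generated tokens.
        return self.prompt_len + self.generated

    @property
    def done(self) -> bool:
        return self.generated >= self.max_new_tokens


class ContinuousBatcher:
    """Toy continuous-batching scheduler with a fixed KV-cache budget."""

    def __init__(self, kv_budget: int):
        self.kv_budget = kv_budget
        self.waiting: deque = deque()
        self.running: list = []
        self.finished: list = []

    def submit(self, seq: Sequence) -> None:
        self.waiting.append(seq)

    def _kv_in_use(self) -> int:
        return sum(s.kv_len for s in self.running)

    def step(self) -> None:
        # Admit waiting sequences while their KV footprint fits the budget.
        while (self.waiting
               and self._kv_in_use() + self.waiting[0].kv_len <= self.kv_budget):
            self.running.append(self.waiting.popleft())
        # One decode step: every running sequence emits one token.
        for s in self.running:
            s.generated += 1
        # Preempt the newest sequences if this step blew the budget; their
        # KV cache is dropped and recomputed from the prompt on re-admission.
        while self._kv_in_use() > self.kv_budget and self.running:
            victim = self.running.pop()   # newest sequence = least work lost
            victim.generated = 0          # will recompute from its prompt
            self.waiting.appendleft(victim)
        # Retire finished sequences, freeing their KV slots for admission.
        still_running = []
        for s in self.running:
            (self.finished if s.done else still_running).append(s)
        self.running = still_running

    def run_to_completion(self, max_steps: int = 1000) -> int:
        steps = 0
        while (self.waiting or self.running) and steps < max_steps:
            self.step()
            steps += 1
        return steps
```

With a budget of 64 KV slots and three submitted prompts of lengths 10, 20, and 30, all three are admitted at once; as decoding grows the caches past the budget, the newest sequence is preempted and later re-admitted, and everything still completes. A real scheduler would page KV blocks rather than recompute whole prompts, but the admit/decode/preempt/retire loop is the shape of the problem.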
Job Type: Full-time
Career Level: Mid Level
Education Level: None listed