As a Distributed LLM Inference Engineer, you will help build systems and optimizations that push the boundaries of performance for inference at large scale. This role is critical to Anyscale because it allows us to offer market-leading performance and pricing for AI infrastructure. You will iterate quickly with product teams to ship end-to-end solutions for batch and online inference at high scale, used by Anyscale's customers. You will work across the stack, integrating Ray Data with the LLM engine and adding optimizations that deliver low-cost solutions for large-scale ML inference. You will build on open-source software like vLLM, work closely with the community to adopt its techniques in Anyscale's solutions, and contribute improvements back upstream. You will also follow the latest state of the art in the open-source and research communities, implementing and extending best practices.