As a Distributed LLM Inference Engineer, you will help build systems and optimizations that push the boundaries of inference performance at large scale. This role is critical to Anyscale, as it allows us to achieve a market-leading position in AI infrastructure. As part of this role, you will:

- Iterate quickly with product teams to ship end-to-end solutions for batch and online inference at high scale, used by open-source Ray users and Anyscale customers
- Work across the stack, integrating Ray Data with the LLM engine and delivering optimizations that lower the cost of large-scale ML inference
- Integrate with open-source software like vLLM, work closely with the community to adopt these techniques in Anyscale solutions, and contribute improvements back to open source
- Follow the latest state of the art in the open-source and research communities, implementing and extending best practices
Job Type
Full-time
Career Level
Senior
Education Level
No Education Listed