At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Principal Machine Learning Engineer focused on vLLM, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in model performance and efficiency. Your work in machine learning and high-performance computing will directly shape our software platform and, with it, the future of AI deployment. If you want to solve challenging technical problems at the forefront of deep learning, the open source way, this is the role for you. Join us in shaping the future of AI!