At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference Engineering team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and llm-d projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on distributed vLLM infrastructure in the llm-d project, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in scalable inference systems and Kubernetes-native deployments. Your work in machine learning, distributed systems, high-performance computing, and cloud infrastructure will directly shape our software platform and the way AI is deployed and used.

If you want to solve cutting-edge problems at the intersection of deep learning, distributed systems, and cloud-native infrastructure the open-source way, this is the role for you. Join us in shaping the future of AI!
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 501-1,000