At Red Hat, we are committed to advancing the future of AI through open source, aiming to bring the power of open-source LLMs and vLLM to every enterprise. Our AI Inference team focuses on accelerating AI for businesses and simplifying GenAI deployments. As key contributors to the vLLM project and pioneers in model quantization and sparsification techniques, we provide a robust platform for enterprises to build, optimize, and scale their LLM deployments.

In this role, you will focus on model optimization algorithms, working closely with our product and research teams to develop state-of-the-art deep learning software. You will collaborate with technical and research teams to create LLM training and deployment pipelines, implement model compression algorithms, and bring deep learning research into production. This is an opportunity to tackle challenging technical problems at the forefront of deep learning within an open-source framework and shape the future of AI.
Job Type: Full-time
Career Level: Senior