Principal/Senior Principal Machine Learning Engineer, llama.cpp

Red River
Boston, MA
Posted 98 days ago
$189,600 - $351,050

About The Position

At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and llm-d projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on llama.cpp, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in model performance and efficiency. Your work in machine learning and high-performance computing will directly shape our software platform and the way AI is deployed and used. If you want to solve challenging technical problems at the forefront of deep learning in the open source way, this is the role for you. Join us in shaping the future of AI!

Requirements

  • Extensive experience writing high-performance, modern C++ code.
  • Strong experience with hardware acceleration libraries and backends such as CUDA, Metal, Vulkan, or SYCL.
  • Strong fundamentals in machine learning and deep learning, with a deep understanding of transformer architectures and LLM inference.
  • Experience with performance profiling, benchmarking, and optimization techniques (illustrated by the micro-benchmark sketch after this list).
  • Proficiency in Python.
  • Prior experience contributing to a major open-source project.
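For context on the profiling and benchmarking experience listed above, the following is a minimal, self-contained C++ sketch of the kind of wall-clock micro-benchmark used to measure kernel latency and throughput. The matvec function and the problem sizes are hypothetical stand-ins chosen for illustration; llama.cpp ships its own benchmarking tooling (for example, llama-bench), and nothing here reflects that project's actual code.

```cpp
// Minimal wall-clock micro-benchmark for a matrix-vector product.
// All names and sizes are illustrative; this only shows the general
// latency/throughput measurement pattern, not any llama.cpp API.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Naive row-major matrix-vector multiply: y = A * x
static void matvec(const std::vector<float>& A,
                   const std::vector<float>& x,
                   std::vector<float>& y,
                   std::size_t rows, std::size_t cols) {
    for (std::size_t r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (std::size_t c = 0; c < cols; ++c) {
            acc += A[r * cols + c] * x[c];
        }
        y[r] = acc;
    }
}

int main() {
    const std::size_t rows = 4096, cols = 4096, iters = 50;
    std::vector<float> A(rows * cols, 1.0f), x(cols, 1.0f), y(rows, 0.0f);

    // Warm up caches before timing.
    matvec(A, x, y, rows, cols);

    const auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < iters; ++i) {
        matvec(A, x, y, rows, cols);
    }
    const auto t1 = std::chrono::steady_clock::now();

    // Average latency per call and rough arithmetic throughput.
    const double ms = std::chrono::duration<double, std::milli>(t1 - t0).count() / iters;
    const double gflops = 2.0 * rows * cols / (ms * 1e-3) / 1e9;
    std::printf("avg latency: %.3f ms, throughput: %.2f GFLOP/s (y[0]=%.1f)\n",
                ms, gflops, y[0]);
    return 0;
}
```

Averaging over many iterations after a warm-up run, and using a monotonic clock, are the basic ingredients of the latency and throughput measurements this role relies on before and after any optimization.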

Responsibilities

  • Design and implement new features and optimizations for the llama.cpp core, including model architecture support, quantization techniques (see the sketch after this list), and inference algorithms.
  • Optimize the codebase for various hardware backends, including CPU instruction sets, Apple Silicon (Metal), and other GPU technologies (CUDA, Vulkan, SYCL).
  • Conduct performance analysis and benchmarking to identify bottlenecks and propose solutions for improving latency and throughput.
  • Contribute to the design and evolution of core project components, such as the GGUF file format and the GGML tensor library.
  • Collaborate with the open-source community by reviewing pull requests, participating in technical discussions on GitHub, and providing guidance on best practices.
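As a concrete illustration of the quantization work referenced in the first responsibility, here is a simplified, self-contained C++ sketch of block-wise symmetric 8-bit quantization with one scale per block of 32 values. It shows the general idea behind block formats such as ggml's Q8_0, but the struct layout, scale type, and rounding here are illustrative assumptions and do not match llama.cpp's actual quantization code or on-disk format.

```cpp
// Simplified block-wise symmetric 8-bit quantization: one float scale
// per block of 32 values. Illustrative only; real ggml/llama.cpp block
// formats use different layouts (e.g. fp16 scales) and SIMD kernels.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr std::size_t kBlockSize = 32;

struct QBlock {
    float scale;            // per-block scale d, so x ≈ q * d
    int8_t q[kBlockSize];   // quantized values in [-127, 127]
};

static std::vector<QBlock> quantize(const std::vector<float>& x) {
    std::vector<QBlock> blocks(x.size() / kBlockSize);
    for (std::size_t b = 0; b < blocks.size(); ++b) {
        const float* src = x.data() + b * kBlockSize;
        // The largest magnitude in the block determines the scale.
        float amax = 0.0f;
        for (std::size_t i = 0; i < kBlockSize; ++i) amax = std::fmax(amax, std::fabs(src[i]));
        const float d = amax / 127.0f;
        const float inv_d = d != 0.0f ? 1.0f / d : 0.0f;
        blocks[b].scale = d;
        for (std::size_t i = 0; i < kBlockSize; ++i) {
            blocks[b].q[i] = static_cast<int8_t>(std::lround(src[i] * inv_d));
        }
    }
    return blocks;
}

static std::vector<float> dequantize(const std::vector<QBlock>& blocks) {
    std::vector<float> out(blocks.size() * kBlockSize);
    for (std::size_t b = 0; b < blocks.size(); ++b)
        for (std::size_t i = 0; i < kBlockSize; ++i)
            out[b * kBlockSize + i] = blocks[b].q[i] * blocks[b].scale;
    return out;
}

int main() {
    std::vector<float> x(64);
    for (std::size_t i = 0; i < x.size(); ++i) x[i] = std::sin(0.1f * static_cast<float>(i));

    const auto blocks = quantize(x);
    const auto y = dequantize(blocks);

    // Report the worst-case round-trip error as a sanity check.
    float max_err = 0.0f;
    for (std::size_t i = 0; i < x.size(); ++i) max_err = std::fmax(max_err, std::fabs(x[i] - y[i]));
    std::printf("blocks: %zu, max round-trip error: %g\n", blocks.size(), max_err);
    return 0;
}
```

A production implementation trades off block size, scale precision, and SIMD-friendly memory layout against accuracy, which is exactly the kind of design decision this role involves.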

Benefits

  • Comprehensive medical, dental, and vision coverage
  • Flexible Spending Account - healthcare and dependent care
  • Health Savings Account - available with the high-deductible medical plan
  • Retirement 401(k) with employer match
  • Paid time off and holidays
  • Paid parental leave plans for all new parents
  • Leave benefits including disability, paid family medical leave, and paid military leave
  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!