Principal Machine Learning Engineer

Red River
Boston, MA
Posted 1 day ago

About The Position

At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on model optimization algorithms, you will work closely with our product and research teams to develop state-of-the-art deep learning software. You will collaborate with our technical and research teams to develop LLM training and deployment pipelines, implement model compression algorithms, and productize deep learning research. If you enjoy bridging research and production, optimizing large models, and contributing to open-source AI tooling, this role is for you. Join us in shaping the future of AI!

Requirements

  • Strong understanding of machine learning and deep learning fundamentals, with experience in one or more of LLM inference optimization and NLP
  • Experience with tensor math libraries such as PyTorch and NumPy
  • Strong programming skills with proven experience implementing Python-based machine learning solutions
  • Ability to develop and implement research ideas and algorithms
  • Experience with mathematical software, especially linear algebra
  • Understanding of linear algebra, gradients, probability, and graph theory
  • Strong communication skills with both technical and non-technical team members
  • BS or MS in computer science, computer engineering, or a related field

Nice To Haves

  • A PhD in an ML-related domain is considered a strong plus.

Responsibilities

  • Contribute to the design, development, and testing of various inference optimization algorithms in the LLM-compressor, Speculators, and vLLM projects.
  • Design, implement, and optimize model compression pipelines using techniques such as quantization and pruning.
  • Develop and maintain speculative decoding frameworks to improve inference speed while maintaining model accuracy.
  • Collaborate closely with research scientists to translate experimental ideas into robust, production-ready systems.
  • Profile and optimize end-to-end LLM performance, including memory usage, latency, and throughput.
  • Benchmark, evaluate, and implement strategies for optimal performance on target hardware.
  • Build tools to streamline model training, evaluation, and deployment.
  • Participate in technical design discussions and propose innovative solutions to complex problems.
  • Contribute to open-source projects, code reviews, and documentation; collaborate with internal and external contributors.
  • Mentor and guide team members, fostering a culture of continuous learning and innovation.
  • Stay current with LLM architectures, inference optimizations, quantization research, and CPU/GPU hardware advancements.

Benefits

  • Comprehensive medical, dental, and vision coverage
  • Flexible Spending Account - healthcare and dependent care
  • Health Savings Account - high deductible medical plan
  • Retirement 401(k) with employer match
  • Paid time off and holidays
  • Paid parental leave plans for all new parents
  • Leave benefits including disability, paid family medical leave, and paid military leave
  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!