About The Position

At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading contributors and maintainers of the vLLM and LLM-D projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Machine Learning Engineer focused on vLLM, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in model performance and efficiency. In this role, you will build and maintain the subsystems that allow vLLM to speak the language of tools: you will bridge the gap between probabilistic token generation and deterministic schema compliance, working directly on tool parsers that interpret raw model outputs and on structured output engines that guide generation at the logit level.

If you want to help solve challenging technical problems at the forefront of deep learning, the open source way, this is the role for you. Join us in shaping the future of AI Inference!
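
To make the structured output side of the role concrete, the sketch below shows the core idea behind guiding generation at the logit level: before sampling, every token the schema engine forbids is masked out so only schema-valid continuations remain. The constrained_sample helper and allowed_ids argument are hypothetical illustrations, not vLLM's actual implementation; in practice the allowed set would come from an engine such as Outlines or XGrammar.

```python
import torch

def constrained_sample(logits: torch.Tensor, allowed_ids: list[int]) -> int:
    """Sample the next token, restricted to ids the schema engine permits.

    logits: raw next-token scores from the model, shape [vocab_size].
    allowed_ids: token ids a grammar/JSON-schema engine says are legal here
    (hypothetical input; a real engine recomputes this set every step).
    """
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_ids] = 0.0                        # keep only schema-valid tokens
    probs = torch.softmax(logits + mask, dim=-1)   # forbidden tokens get probability 0
    return int(torch.multinomial(probs, num_samples=1))
```

A production engine repeats this inside the decoding loop, updating the allowed set incrementally as each token is emitted, which is why the parsing and performance skills listed below matter.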

Requirements

  • Strong experience in Python and Pydantic (see the tool-call parsing sketch after this list)
  • Strong understanding of core LLM inference concepts, such as logits processing (i.e., the logit generation -> sampling -> decoding loop)
  • Deep familiarity with the OpenAI Chat Completions API specification
  • Deep familiarity with libraries like Outlines, XGrammar, Guidance, or Llama.cpp grammars
  • Proficiency with efficient parsing techniques (e.g., incremental parsing) is a strong plus
  • Proficiency with Jinja2 chat templates
  • Familiarity with beam search and greedy decoding in the context of constrained generation
  • Familiarity with LLM inference metrics and tradeoffs
  • Experience with tensor math libraries such as PyTorch is a strong plus
  • Strong communication skills with both technical and non-technical team members
  • BS or MS in computer science, computer engineering, mathematics, or a related field; a PhD in an ML-related domain is a plus
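
To illustrate the Pydantic and parsing requirements above, here is a minimal, hypothetical sketch of turning a raw model reply into a validated tool call. The ToolCall schema, the parse_tool_call helper, and the example string are invented for illustration (this is not vLLM's tool parser), and validation uses Pydantic v2's model_validate:

```python
import json
from pydantic import BaseModel, ValidationError

class ToolCall(BaseModel):
    """Illustrative shape the parser must coerce free-form model text into."""
    name: str
    arguments: dict

def parse_tool_call(raw: str) -> ToolCall | None:
    """Extract the first JSON object from the model's output and validate it."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        return None                        # the model produced no JSON object
    try:
        return ToolCall.model_validate(json.loads(raw[start:end + 1]))
    except (json.JSONDecodeError, ValidationError):
        return None                        # malformed JSON or wrong shape

# A model that wraps its call in prose still parses cleanly:
print(parse_tool_call('Sure! {"name": "get_weather", "arguments": {"city": "Boston"}}'))
```

Production tool parsers also have to handle streaming (partial) output, multiple calls per reply, and model-specific call formats, which is where incremental parsing techniques come in.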

Responsibilities

  • Write robust Python (including Pydantic models) across vLLM systems, high-performance machine learning primitives, performance analysis and modeling, and numerical methods
  • Contribute to the design, development, and testing of the function calling, tool-call parsing, and structured output subsystems in vLLM
  • Participate in technical design discussions and provide innovative solutions to complex problems
  • Give thoughtful and prompt code reviews
  • Mentor and guide other engineers and foster a culture of continuous learning and innovation

Benefits

  • Comprehensive medical, dental, and vision coverage
  • Flexible Spending Account - healthcare and dependent care
  • Health Savings Account - high deductible medical plan
  • Retirement 401(k) with employer match
  • Paid time off and holidays
  • Paid parental leave plans for all new parents
  • Leave benefits including disability, paid family medical leave, and paid military leave
  • Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 501-1,000 employees
