About The Position

EnCharge AI is seeking an LLM Inference Deployment Engineer to optimize, deploy, and scale large language models (LLMs) for high-performance inference on its energy-efficient AI accelerators. You will work at the intersection of AI frameworks, model optimization, and runtime execution to deliver efficient model execution and low-latency inference.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.
  • Experience in LLM inference deployment, model optimization, and runtime engineering.
  • Strong expertise in LLM inference frameworks (PyTorch, ONNX Runtime, vLLM, TensorRT-LLM, DeepSpeed).
  • In-depth knowledge of Python for model integration and performance tuning.
  • Strong understanding of high-level model representations and experience implementing framework-level optimizations for Generative AI use cases.
  • Experience with containerized AI deployments (Docker, Kubernetes, Triton Inference Server, TensorFlow Serving, TorchServe).
  • Strong knowledge of LLM memory optimization strategies for long-context applications.
  • Experience with real-time LLM applications (chatbots, code generation, retrieval-augmented generation).

Responsibilities

  • Deploy and optimize post-trained LLMs (GPT, LLaMA, Mistral, Falcon, etc.) from libraries such as Hugging Face.
  • Utilize inference runtimes such as ONNX Runtime and vLLM for efficient execution.
  • Optimize batching, caching, and tensor parallelism to improve LLM scalability in real-time applications.
  • Develop and maintain high-performance inference pipelines using Docker, Kubernetes, and inference servers such as Triton.