We are seeking an experienced AI Optimization Engineer to support large-scale AI/ML and Generative AI workloads in an enterprise environment. This role focuses on optimizing, deploying, and managing machine learning models and large language models (LLMs) on GPU-accelerated HPC infrastructure. The ideal candidate will have strong experience with Python-based machine learning, deep learning frameworks, model optimization techniques, and scalable AI infrastructure.

The engineer will work closely with AI, infrastructure, and DevOps teams to design efficient model training and inference pipelines, implement SLURM-based workload orchestration, and deploy containerized ML solutions in production environments. Responsibilities include optimizing model performance through techniques such as pruning, quantization, and knowledge distillation; managing inference workflows with Triton Inference Server; and monitoring system performance with Prometheus and Grafana.

This role requires hands-on experience with HPC environments, GPU clusters, containerization technologies, and Linux system administration, along with strong knowledge of machine learning algorithms, deep learning architectures, and modern AI development tools. Experience with cloud platforms, vector embeddings, and enterprise-scale AI deployments is highly preferred.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
11-50 employees