About The Position

Join Cisco’s Customer Experience (CX) AI Incubation team as a Senior AI/ML DevOps Engineer and help productionize LLM/SLM capabilities for Intelligent Customer Experiences across cloud and on-prem environments. In Cisco CX, you will build and operate scalable AI systems that move from prototype to production, powering delivery intelligence, network automation, infrastructure testing, and intelligence at the edge. The role focuses on end-to-end AI DevOps for LLMs/SLMs, including on-prem inference packaging, runtime optimization, deployment automation, and model/service observability. You will collaborate with product and engineering teams to deploy reliable, secure, and observable AI services, optimizing inference performance from CPUs and small GPUs to large multi-GPU servers, including air-gapped and customer-managed deployments. This role requires strong software engineering skills, hands-on GPU inference experience, and a track record of operationalizing models at scale.

Requirements

  • Bachelor’s degree with 7+ years of related experience, or Master’s degree with 4+ years of related experience.
  • Experience in Python, Java, or C++, and in building production services for ML/AI workloads.
  • Experience with PyTorch/TensorFlow and tooling across the ML lifecycle (data pipelines, training, evaluation, deployment).
  • Experience deploying and operating NLP/Generative AI systems in production, including performance tuning and reliability practices.
  • Experience working in cross-functional teams, delivering in fast-paced environments, and communicating technical concepts clearly.

Nice To Haves

  • Proven experience productionizing LLMs/SLMs with GPU-backed inference and runtime optimization (quantization, batching, parallelism).
  • Hands-on experience with on-prem deployment patterns (air-gapped, customer-managed), including packaging, integration, and upgrade strategy.
  • Experience with AI infrastructure and MLOps/AI DevOps tooling (K8s, CI/CD, model registry, experiment tracking, observability).
  • Familiarity with inference engines and GPU profiling (vLLM, Triton, TensorRT-LLM, llama.cpp).
  • Exposure to edge deployments and resource-constrained inference environments.
  • Strong written and verbal communication skills, with the ability to contribute to design discussions and documentation.

Responsibilities

  • Productionize LLM/SLM-powered features by building robust model-serving and deployment pipelines (cloud + on-prem) with clear SLAs, monitoring, and rollback strategies.
  • Optimize inference performance across CPU, small GPUs, and large multi-GPU servers using quantization, batching, KV-cache strategies, and runtime tuning for cost and latency.
  • Package and integrate on-prem inference stacks (VM/containers) with customer environments, including secure configuration, versioning, and upgrade-safe deployments.
  • Design scalable serving architectures for generative AI (multi-tenant, secure, cost-aware), including capacity planning and performance benchmarking.
  • Build automated CI/CD for models and prompts: evaluation gates, regression testing, artifact management, and reproducible releases.
  • Implement model and service observability: latency/throughput metrics, quality drift signals, safety checks, and incident triage workflows.
  • Support training and fine-tuning workflows for LLMs/SLMs, including data curation, experiment tracking, and packaging models for production.
  • Partner with product and engineering to integrate AI services into applications, ensuring reliability, security, and responsible AI behavior.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001–10,000 employees
