DevOps Principal Consultant

Hexaware, United States
Onsite

About The Position

As the Azure Stack AI DevOps Specialist, you design, implement, and manage CI/CD pipelines for AI and machine learning applications hosted on Azure Stack infrastructure. You ensure that infrastructure is treated as code (IaC) and that AI models are seamlessly deployed, monitored, and retrained in hybrid cloud environments.

Requirements

  • Azure Stack Hub, Azure Stack Edge, Azure Stack HCI
  • Azure DevOps, GitHub Actions, Jenkins
  • Terraform, Bicep, ARM Templates, Ansible
  • Docker, Azure Kubernetes Service (AKS) on Stack
  • Azure Machine Learning, PyTorch, TensorFlow, MLflow
  • Python (essential for AI workloads), PowerShell, Bash
  • Connectivity Awareness: Design systems that can function in disconnected or low-bandwidth scenarios (common in Azure Stack environments).
  • Hardware Knowledge: Understanding the physical constraints of Azure Stack Edge (like FPGA or GPU capabilities) is necessary for optimizing AI models.
  • MLOps Focus: Managing the lifecycle of a "living" model that requires constant data feeding and retraining loops.
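The retraining loop mentioned above often has to run on edge devices that sync only intermittently. As a minimal sketch (dependency-free; the function name and the mean-shift heuristic are illustrative, not part of any Azure SDK), a device can compare recent inference statistics against a training-time baseline and decide locally whether to queue a retraining job:

```python
import statistics

def needs_retraining(baseline, recent, threshold=0.25):
    """Flag a model for retraining when the mean of a monitored
    feature drifts too far from its training-time baseline.

    baseline, recent: lists of numeric feature values.
    threshold: allowed relative drift before retraining is queued.
    """
    base_mean = statistics.mean(baseline)
    recent_mean = statistics.mean(recent)
    if base_mean == 0:
        return recent_mean != 0
    drift = abs(recent_mean - base_mean) / abs(base_mean)
    return drift > threshold

# Values collected on an edge device between sync windows.
print(needs_retraining([10, 11, 9, 10], [10, 10, 11, 9]))   # stable input
print(needs_retraining([10, 11, 9, 10], [15, 16, 14, 15]))  # drifted input
```

In production this check would feed a pipeline trigger (e.g. an Azure Pipelines run) rather than a print statement, but the decision logic itself stays deliberately simple so it can run disconnected.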

Responsibilities

  • Provisioning: Use Terraform or Bicep to automate the setup of Azure Stack Hub or Edge resources.
  • Scalability: Configure GPU-enabled nodes on Azure Stack to handle intensive AI/ML workloads.
  • Governance: Implement Azure Policy and Role-Based Access Control (RBAC) to maintain security across on-premises and cloud environments.
  • Automation: Build end-to-end pipelines using Azure Pipelines or GitHub Actions to automate model training, testing, and deployment.
  • Model Versioning: Manage model artifacts and datasets to ensure reproducibility of AI results.
  • Edge Deployment: Orchestrate the deployment of AI models to Azure Stack Edge devices using IoT Edge and Kubernetes (AKS).
  • Observability: Implement Azure Monitor and Application Insights to track the health of both the infrastructure and the AI model’s performance (e.g., detecting data drift).
  • Performance Tuning: Optimize resource allocation for containers running AI inference to reduce latency at the edge.
  • DevSecOps: Integrate security scanning into the pipeline to check for vulnerabilities in container images and AI libraries.
  • Data Residency: Ensure that AI processing complies with local data residency laws by keeping sensitive data on the Azure Stack Hub within the local datacenter.
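The model-versioning responsibility above hinges on reproducibility: a deployed model should be traceable to the exact weights, dataset, and training parameters that produced it. A minimal sketch of that idea (a simplified stand-in for registry tooling such as MLflow's model registry; the function and parameter names are illustrative) derives a version tag by content-hashing those inputs:

```python
import hashlib
import json

def artifact_fingerprint(model_bytes: bytes, dataset_id: str, params: dict) -> str:
    """Derive a reproducible version tag for a model artifact by hashing
    the serialized weights together with the dataset identifier and the
    training parameters.
    """
    h = hashlib.sha256()
    h.update(model_bytes)
    h.update(dataset_id.encode())
    # Canonical JSON so the same params always hash identically.
    h.update(json.dumps(params, sort_keys=True).encode())
    return h.hexdigest()[:12]

tag = artifact_fingerprint(b"\x00weights", "sales-2025-q4", {"lr": 0.01, "epochs": 20})
print(tag)  # identical inputs always yield the same 12-character tag
```

Because the tag is a pure function of its inputs, the same model retrained on the same data with the same parameters reproduces the same version, while any change to weights, data, or hyperparameters yields a new one.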