Johnson Controls International (JCI) is looking for a Machine Learning / Platform Engineer to join our growing AI and Data Platform team. This role is pivotal in enabling enterprise-scale ML and generative AI capabilities by building secure, scalable, and automated infrastructure on Azure using Terraform and Azure DevOps. You'll work at the intersection of ML, DevOps, and cloud engineering, building the foundation that supports real-time LLM inference, retraining, orchestration, and integration across JCI's product and operations landscape.

How you will do it

ML Platform Engineering & MLOps (Azure-Focused)
- Build and manage end-to-end ML/LLM pipelines on Azure ML, using Azure DevOps for CI/CD, testing, and release automation.
- Operationalize LLMs and generative AI solutions (e.g., GPT, LLaMA, Claude) with a focus on automation, security, and scalability.
- Develop and manage infrastructure as code using Terraform, including provisioning compute clusters (e.g., Azure Kubernetes Service, Azure Machine Learning compute), storage, and networking.
- Implement robust model lifecycle management (versioning, monitoring, drift detection) with Azure-native MLOps components.

Infrastructure & Cloud Architecture
- Design highly available, performant serving environments for LLM inference using Azure Kubernetes Service (AKS) and Azure Functions or App Services.
- Build and manage RAG pipelines using vector databases (e.g., Azure Cognitive Search, Redis, FAISS), orchestrated with tools like LangChain or Semantic Kernel.
- Ensure security, logging, role-based access control (RBAC), and audit trails are implemented consistently across environments.

Automation & CI/CD Pipelines
- Build reusable Azure DevOps pipelines for deploying ML assets (data pre-processing, model training, evaluation, and inference services).
- Use Terraform to automate provisioning of Azure resources, ensuring consistent and compliant environments for data science and engineering teams.
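As a rough sketch, the Terraform-based provisioning described above might start with something like the following. All resource names, node sizes, and the region are illustrative assumptions, not JCI standards:

```hcl
# Illustrative sketch only: provisions a resource group and an AKS cluster
# for model serving. Names, region, and VM sizes are placeholder assumptions.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "ml" {
  name     = "rg-ml-platform"   # hypothetical name
  location = "eastus"           # assumed region
}

resource "azurerm_kubernetes_cluster" "inference" {
  name                = "aks-llm-inference"  # hypothetical name
  location            = azurerm_resource_group.ml.location
  resource_group_name = azurerm_resource_group.ml.name
  dns_prefix          = "llm-inference"

  default_node_pool {
    name       = "default"
    node_count = 2                  # assumed baseline capacity
    vm_size    = "Standard_D4s_v3"  # assumed size; GPU SKUs likely for LLM serving
  }

  identity {
    type = "SystemAssigned"
  }
}
```

In practice such definitions would typically live in reusable modules with remote state, so data science and engineering teams get consistent, compliant environments from the same code.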
- Integrate automated testing, linting, monitoring, and rollback mechanisms into the ML deployment pipeline.

Collaboration & Enablement
- Work closely with Data Scientists, Cloud Engineers, and Product Teams to deliver production-ready AI features.
- Contribute to solution architecture for real-time and batch AI use cases, including conversational AI, enterprise search, and summarization tools powered by LLMs.
- Provide technical guidance on cost optimization, scalability patterns, and high-availability ML deployments.
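To make the CI/CD responsibilities above concrete, a reusable Azure DevOps pipeline for ML assets might be sketched as follows. Stage names, script paths, and the pipeline variables are illustrative assumptions, not an actual JCI pipeline:

```yaml
# Illustrative sketch of an Azure DevOps pipeline for ML assets:
# lint/test on every change to main, then submit an Azure ML training job.
# RESOURCE_GROUP, WORKSPACE, and file paths are placeholder assumptions.
trigger:
  branches:
    include:
      - main

variables:
  RESOURCE_GROUP: rg-ml-platform   # hypothetical
  WORKSPACE: mlw-platform          # hypothetical

stages:
  - stage: Validate
    jobs:
      - job: LintAndTest
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: pip install -r requirements.txt
            displayName: Install dependencies
          - script: ruff check src/ && pytest tests/
            displayName: Lint and unit tests

  - stage: Train
    dependsOn: Validate
    jobs:
      - job: SubmitTraining
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: >
              az ml job create --file jobs/train.yml
              --resource-group $(RESOURCE_GROUP)
              --workspace-name $(WORKSPACE)
            displayName: Submit Azure ML training job
```

Monitoring and rollback would layer on top of this, for example by gating the release stage on evaluation metrics and redeploying the previously registered model version if checks fail.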