About The Position

Your Journey at Crowe Starts Here: At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you’re trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for career growth and leadership. Over our 80-year history, delivering excellent service through innovation has been a core part of our DNA across our audit, tax, and consulting groups. That’s why we continuously invest in innovative ideas, such as AI-enabled insights and technology-powered solutions, to enhance our services. Join us at Crowe and embark on a career where you can help shape the future of our industry.

About Crowe AI Transformation

Everything we do is about making the future of human work more purposeful. We do this by leveraging state-of-the-art technologies, modern architecture, and industry experts to create AI-powered solutions that transform the way our clients do business. The new AI Transformation team will build on Crowe’s established AI foundation, furthering the capabilities of our Applied AI / Machine Learning team. By combining Generative AI, Machine Learning, and Software Engineering, this team empowers Crowe clients to transform their business models through AI, irrespective of their current AI adoption stage. As a member of AI Transformation, you will help distinguish Crowe in the market and drive the firm’s technology and innovation strategy. The future is powered by AI. Come build it with us.

About the Team

We invest in expertise. You’ll have the time, space, and support to go deep in your projects and build lasting technical and strategic mastery. You’ll work with developers, product stakeholders, and project managers as a trusted leader and domain expert. We believe in continuous growth. Our team is committed to professional development and knowledge-sharing. We protect balance. Our distributed team culture is grounded in trust and flexibility. We offer unlimited PTO, a flexible remote work policy, and a supportive environment that prioritizes sustainable, long-term performance.

About the Role

The AI DevOps and Cloud Infrastructure Manager leads teams responsible for designing, operating, and scaling the AI/ML infrastructure, cloud platforms, and DevOps automation that support enterprise model training, inference, and generative AI workloads. This role owns the strategy and execution of cloud-native, Kubernetes-based platforms that enable reliable, secure, and cost-efficient AI systems. As a manager, this position combines hands-on technical leadership with people management, delivery ownership, and strategic decision-making. The role oversees distributed compute environments, GPU clusters, CI/CD pipelines, and vector-search infrastructure while ensuring high availability, resilience, and compliance with security and responsible AI standards. The manager partners closely with AI engineering, data engineering, product, and security teams, serves as the primary technical owner for assigned initiatives, and communicates system risks, tradeoffs, and progress to leadership.

Requirements

  • 7+ years of professional experience in DevOps, cloud engineering, MLOps, or platform engineering.
  • 2+ years of experience in engineering leadership or senior technical leadership roles.
  • Expert proficiency with distributed cloud systems, Kubernetes, and infrastructure-as-code.
  • Advanced ability to troubleshoot infrastructure, networking, container, and deployment issues.
  • Proficiency in Python, Bash, or similar automation and scripting languages.
  • Strong understanding of monitoring, observability, and reliability engineering patterns.
  • Hands-on experience supporting infrastructure for ML or generative AI workloads.
  • Strong leadership, communication, and cross-functional collaboration skills.

Nice To Haves

  • Bachelor’s degree in computer science, engineering, cloud computing, or a related field.
  • Master’s degree in a technical discipline.
  • Cloud and AI certifications, including Azure (AZ-900, AZ-104, AZ-305, AZ-700, AZ-800, AI-102) or equivalent AWS/GCP certifications.
  • Extensive experience with Kubernetes platforms (EKS, AKS, GKE) and cloud ML services (Azure ML, SageMaker).
  • Experience with GPU workload orchestration, optimization, and multi-tenant inference environments.
  • Expertise in observability and distributed tracing (Prometheus, Grafana, CloudWatch, OpenTelemetry).
  • Strong experience with Terraform and infrastructure governance at scale.
  • Familiarity with service mesh architectures (Istio, Linkerd) and advanced deployment patterns (blue/green, canary).
  • Advanced experience supporting generative AI platforms, including LLM inference runtimes (vLLM, TGI), RAG infrastructure, and vector databases (Pinecone, Weaviate, FAISS).
  • Experience operating fine-tuned LLMs (LoRA, QLoRA), managing GenAI CI/CD pipelines, and implementing hallucination, drift, and reliability monitoring.
  • Demonstrated ability to make strategic technical decisions within defined delivery and budget constraints.

Responsibilities

  • Leading engineering teams responsible for AI/ML infrastructure, cloud operations, and MLOps automation.
  • Defining cloud, Kubernetes, and infrastructure strategy to support scalable model training, inference, and generative AI platforms.
  • Guiding the design and operation of distributed compute environments, GPU clusters, and vector database infrastructure.
  • Overseeing CI/CD pipelines that automate model training, testing, deployment, monitoring, and lifecycle management.
  • Managing incident response, failure analysis, and reliability engineering across AI platforms.
  • Directing performance testing, capacity planning, and cost optimization for AI infrastructure.
  • Ensuring compliance with cloud security, IAM practices, governance requirements, and responsible AI frameworks.
  • Implementing multi-cloud resilience patterns, high availability, and automated failover for critical AI workloads.
  • Supporting platform modernization initiatives, including adoption of optimized LLM runtimes and new orchestration technologies.
  • Evaluating third-party infrastructure tools, GPU scheduling solutions, and platform enhancements.
  • Communicating system status, dependencies, risks, and technical decisions to senior leadership.
  • Managing 4–5 direct reports, including coaching, performance management, and career development.
  • Owning project delivery, including budget, timelines, and quality of outcomes.
  • Coordinating with sales and stakeholders on project sizing, feasibility, and strategic opportunities.
  • Driving continuous improvement initiatives to advance DevOps maturity and AI infrastructure operational readiness.

Benefits

  • Real flexibility to balance work with life moments.
  • Unlimited PTO.
  • Flexible remote work policy.
  • Comprehensive total rewards package.