About The Position

About Crowe AI Transformation

Everything we do is about making the future of human work more purposeful. We do this by leveraging state-of-the-art technologies, modern architecture, and industry experts to create AI-powered solutions that transform the way our clients do business. The new AI Transformation team will build on Crowe’s established AI foundation, furthering the capabilities of our Applied AI / Machine Learning team. By combining Generative AI, Machine Learning, and Software Engineering, this team empowers Crowe clients to transform their business models through AI, irrespective of their current AI adoption stage. As a member of AI Transformation, you will help distinguish Crowe in the market and drive the firm’s technology and innovation strategy. The future is powered by AI; come build it with us.

About the Team

  • We invest in expertise. You’ll have the time, space, and support to go deep in your projects and build lasting technical and strategic mastery. You’ll work with developers, product stakeholders, and project managers as a trusted leader and domain expert.
  • We believe in continuous growth. Our team is committed to professional development and knowledge-sharing.
  • We protect balance. Our distributed team culture is grounded in trust and flexibility. We offer unlimited PTO, a flexible remote work policy, and a supportive environment that prioritizes sustainable, long-term performance.

About the Role

The AI DevOps and Cloud Infrastructure Engineer I (Senior Staff) designs, builds, and operates scalable, secure, and highly automated cloud environments that support the training, deployment, monitoring, and continuous delivery of AI and machine learning systems. This role serves as a subject-matter expert in infrastructure automation, distributed compute orchestration, and cloud platform operations, ensuring AI workloads perform reliably across development, staging, and production environments.

The engineer collaborates closely with AI engineering, MLOps, data engineering, platform, and security teams to define infrastructure requirements, improve observability, and support the performance demands of predictive and generative AI workloads. As a senior staff-level contributor, the role establishes best practices, evaluates emerging cloud and AI infrastructure tooling, and mentors junior engineers to advance DevOps maturity, reliability, and cost efficiency across the organization.

Requirements

  • 4+ years of experience in DevOps, cloud engineering, platform engineering, or infrastructure engineering.
  • Strong proficiency with Kubernetes, Docker, and cloud orchestration platforms.
  • Deep experience with CI/CD systems and deployment automation.
  • Demonstrated ability to debug distributed systems and cloud networking issues.
  • Proficiency in Python, Bash, or other automation/scripting languages.
  • Strong communication skills and ability to collaborate across engineering and security teams.
  • Willingness to travel occasionally for cross-functional planning and collaboration.

Nice To Haves

  • Bachelor’s degree in Computer Science, Cloud Engineering, Information Systems, or a related technical field, or equivalent experience.
  • Master’s degree in a technical discipline.
  • Experience enabling ML or AI workloads at scale in production environments.
  • Cloud and platform certifications, including Azure (AZ-900, AZ-104, AZ-305, AZ-700, AI-102) or equivalent AWS/GCP certifications.
  • Advanced experience with AWS (e.g., EKS, EC2, IAM, Lambda, SageMaker) and/or Azure (e.g., AKS, VMSS, Azure ML).
  • Experience with GPU orchestration and scaling strategies for AI workloads.
  • Expertise with Terraform or other infrastructure-as-code frameworks.
  • Hands-on experience with observability stacks such as Prometheus, Grafana, CloudWatch, and OpenTelemetry.
  • Experience deploying and operating generative AI workloads, including LLM inference autoscaling and RAG architectures.
  • Familiarity with vector database hosting (e.g., Pinecone, Weaviate, FAISS) and model-serving frameworks (e.g., Hugging Face TGI, vLLM, custom inference containers).
  • Experience building CI/CD pipelines for LLM fine-tuning workflows (e.g., LoRA, QLoRA, PEFT) and monitoring generative AI performance metrics such as latency, throughput, and hallucination rates.

Responsibilities

  • Architecting and maintaining cloud infrastructure for AI model training, inference services, and distributed compute workloads.
  • Implementing infrastructure-as-code (IaC) to automate provisioning, configuration, scaling, and lifecycle management of cloud resources.
  • Designing and operating CI/CD pipelines for automated model training, testing, and deployment of AI-enabled applications.
  • Optimizing Kubernetes clusters, GPU utilization, and compute scaling strategies to balance performance, reliability, and cost.
  • Integrating AI models, inference endpoints, and data pipelines into cloud-native platforms.
  • Developing monitoring, logging, alerting, and observability solutions using modern telemetry and tracing tools.
  • Troubleshooting issues across networking, containers, compute, storage, and model-serving layers.
  • Leading performance benchmarking, load testing, and reliability validation for AI systems.
  • Documenting infrastructure architectures, operational runbooks, and engineering standards.
  • Supporting automation for dataset ingestion, model versioning, artifact management, and ML testing.
  • Ensuring compliance with cloud security, identity management, encryption, and responsible AI guidelines.
  • Partnering with security teams to implement secure networking, IAM policies, and secrets management.
  • Providing technical mentorship, design reviews, and cloud best-practice guidance to junior engineers.
  • Evaluating new cloud services, platform capabilities, and AI infrastructure tooling for adoption.