Data & AI Platform Engineer

Armanino LLP, Chicago, IL
Hybrid

About The Position

This is an early-career engineering role focused on building, operating, and improving cloud data/analytics platforms (e.g., Microsoft Fabric, Snowflake, Databricks) and supporting BI delivery (e.g., Power BI, Tableau). You’ll contribute to platform reliability, CI/CD, environment setup, and operational runbooks—while learning best practices in security, governance, and deployment automation from senior engineers. Armanino delivers reporting, analytics, warehousing, and targeted ML/AI/GenAI solutions across Microsoft Cloud (Azure, M365, Dynamics, Power BI) and hybrid/on-prem environments. We partner with application teams and systems engineering to run secure, reliable, and well-governed platforms.

Requirements

  • Minimum 2 years’ experience in a data engineering, platform engineering, analytics engineering, or cloud operations role (internships/co-ops count).
  • Comfort with one or more of the following areas (not all required):
      • A cloud data platform: Snowflake (preferred), Microsoft Fabric, or Databricks
      • A BI platform: Power BI or Tableau
      • CI/CD concepts (Git, branching/PRs, basic pipelines)
  • Basic scripting capability in Python or PowerShell (or willingness to learn quickly).
  • Strong troubleshooting mindset: can break problems down, gather evidence, and communicate clearly.
  • Ability to document what you learned (runbooks, checklists, short “how-to” guides).
  • Exposure to cloud concepts: identity/access, resource organization, logging/monitoring.
  • Familiarity with SQL and performance basics (indexes/partitioning concepts, query plans at a high level).
  • Understanding of data governance concepts (PII, RLS/CLS, environment separation).
  • Ability to work onsite at any of our offices or at a client site up to 50% of the time.

Nice To Haves

  • Relevant certifications (Azure Fundamentals, Snowflake, Databricks, etc.)
  • Demonstrated experience with AI/ML/GenAI enablement (model lifecycle, AI Search, Azure OpenAI integration, or MLOps).

Responsibilities

  • Support day-to-day operations for cloud data platforms (workspace/project setup, access requests, basic configuration, troubleshooting).
  • Assist with platform hygiene: organizing environments, documenting standards, and improving repeatability.
  • Help implement and maintain guardrails (naming/tagging conventions, access patterns, basic cost awareness).
  • Contribute to CI/CD workflows for data and analytics assets (pipelines, jobs, semantic models, reports), under mentorship.
  • Help maintain reusable templates/checklists for deployments (approvals, promotion steps, rollback notes, release documentation).
  • Assist teams with onboarding and “how-to” enablement across a core platform (Fabric or Snowflake or Databricks) plus a BI tool (Power BI or Tableau).
  • Support basic performance triage (query/job failures, refresh issues, common workspace capacity constraints) and escalate effectively when needed.
  • Help build and maintain runbooks (standard operating procedures, known issues, quick fixes, escalation paths).
  • Participate in incident response support (triage, notes, follow-ups) and contribute to preventative improvements.
  • Follow and reinforce least-privilege access practices and secure secret handling (e.g., Key Vault/secret scopes where applicable).
  • Assist with data governance basics (PII handling expectations, row-level access concepts, environment separation).

Benefits

  • Medical, dental, vision
  • Generous PTO plan and paid sick time
  • Flexible work arrangements
  • 401K with Profit Sharing
  • Wellness program
  • Generous parental leave
  • 11 paid holidays


What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Education Level: No Education Listed
  • Number of Employees: 501-1,000 employees
