Advisor II, DevOps Engineering

Phillips 66
Bartlesville, OK

About The Position

As an Advisor II, DevOps Engineering, you’ll play a hands-on role in enabling dependable, secure, and modern technology solutions that support business operations and digital progress. This position blends technical depth with collaboration, giving you the opportunity to apply DevOps best practices while working closely with cross-functional teams. If you enjoy solving complex problems, learning continuously, and contributing to meaningful outcomes, this role offers a strong platform for growth and impact.

Requirements

  • High School Diploma or GED equivalent
  • 1 or more years of hands-on experience in DevOps, SRE, Platform Engineering, Software Engineering, or a related technical role
  • 1 or more years of hands-on MLOps experience: MLflow tracking/Model Registry, model packaging, evaluation gates, and deployment (batch or real-time)
  • Intermediate proficiency with Terraform (modules, workspaces, remote state), PowerShell/Python, and Azure (Entra ID, Key Vault, Storage, Networking, ADF or Fabric)
  • Working knowledge of DevOps practices, CI/CD concepts, and modern software development principles
  • Experience with at least one modern programming language (such as C#, .NET, Java, or Python)
  • Foundational understanding of databases and application architecture
  • Legally authorized to work in the job posting country
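The MLOps requirement above mentions evaluation gates that decide whether a packaged model may be deployed. A minimal sketch of that idea follows; the function names, metric names, and thresholds are illustrative assumptions, not part of this posting or of any specific MLflow API.

```python
# Illustrative evaluation gate: a candidate model is promoted only if
# every gated metric clears its configured floor. Names and thresholds
# are hypothetical examples, not Phillips 66 or MLflow specifics.

def passes_evaluation_gate(metrics: dict, thresholds: dict) -> bool:
    """Return True only if every gated metric meets or exceeds its floor."""
    return all(metrics.get(name, float("-inf")) >= floor
               for name, floor in thresholds.items())

def promotion_decision(metrics: dict, thresholds: dict) -> str:
    """Map the gate outcome to a registry-style stage label."""
    return "Production" if passes_evaluation_gate(metrics, thresholds) else "Archived"

# Example: accuracy 0.91 and AUC 0.88 against floors of 0.90 and 0.85
decision = promotion_decision(
    {"accuracy": 0.91, "auc": 0.88},
    {"accuracy": 0.90, "auc": 0.85},
)
```

In practice a gate like this would sit between a training job and a Model Registry promotion step, with the thresholds kept in version-controlled configuration.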

Nice To Haves

  • 5 or more years’ experience in DevOps/SRE/Platform Engineering, including cloud data platforms (Azure preferred)
  • 3 or more years’ experience building CI/CD with Azure DevOps (YAML) or equivalent; strong Git fundamentals
  • 2 or more years’ experience with Databricks (Jobs/Workflows, clusters/cluster policies, Repos, Jobs API, Unity Catalog)
  • Experience implementing or supporting CI/CD pipelines in production environments
  • Familiarity with cloud platforms such as Microsoft Azure or AWS
  • Exposure to DevSecOps practices, including security automation and compliance considerations
  • Experience with container technologies like Docker, Kubernetes, or similar tools
  • Knowledge of automated testing, monitoring, alerting, or infrastructure automation
  • Experience in regulated or industrial environments, such as oil and gas
  • Familiarity with AI-enabled services or modern integration platforms
  • Experience using low-code or workflow automation tools such as Microsoft Power Platform
  • Ability to work independently on defined tasks while collaborating effectively with cross-functional teams
  • Strong problem-solving skills with the ability to analyze issues and contribute to practical solutions
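The Databricks experience listed above includes working with the Jobs API. As a rough illustration, a create-job request body for a notebook task can be assembled as plain data before it is sent; the field layout below follows the publicly documented Jobs API 2.1 shape, but the cluster settings, paths, and helper name are assumptions for the sketch.

```python
# Hedged sketch: building a Databricks Jobs API 2.1 create-job payload
# for a single notebook task. The node type, runtime version, and paths
# are illustrative placeholders, not recommendations.

def notebook_job_payload(job_name: str, notebook_path: str,
                         spark_version: str = "13.3.x-scala2.12",
                         num_workers: int = 2) -> dict:
    """Return a dict matching the Jobs API 2.1 create-job request shape."""
    return {
        "name": job_name,
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
                "new_cluster": {
                    "spark_version": spark_version,
                    "node_type_id": "Standard_DS3_v2",  # illustrative Azure node type
                    "num_workers": num_workers,
                },
            }
        ],
    }

payload = notebook_job_payload("nightly-etl", "/Repos/team/etl/main")
```

Keeping the payload as a pure function makes it easy to unit-test in CI before any call reaches the workspace.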

Responsibilities

  • Design, implement, and maintain CI/CD pipelines (YAML) for Databricks notebooks/jobs, Delta pipelines, ADF/Fabric Data Pipelines, and data platform infrastructure.
  • Implement progressive deployments (Dev → test rings → Prod) with automated validations, approvals, and rollbacks.
  • Build and maintain Infrastructure-as-Code (Terraform) and standardized modules for repeatable environment provisioning (Databricks workspaces, clusters/cluster policies, SQL Warehouses, ADF/Fabric, Storage, Key Vault, networking).
  • Operationalize Databricks workloads: Jobs/Workflows, cluster policies, libraries, repos, Unity Catalog objects, and workspace governance; drive cost/perf optimization (autoscaling, spot usage, guardrails).
  • Create and maintain release automation for SQL objects, configuration tables, and data pipelines; standardize change validation and post-deploy checks.
  • Establish runbooks, golden-path templates, and self-service patterns for data engineering teams.
  • Stand up and administer MLflow Tracking and Model Registry; define model versioning, promotion, and governance workflows.
  • Build CI/CD for ML: automate training jobs, model packaging, evaluation gates (data/accuracy/performance), and deployment to batch/real-time serving endpoints.
  • Integrate data & model quality checks (drift, skew, SLA/SLO alerts) and route signals to the right teams; partner with DS/DE to drive continuous improvement.
  • Instrument observability (Azure Monitor/Log Analytics, Databricks metrics, pipeline telemetry); define actionable alerts and SLOs for platform and jobs.
  • Lead incident response for data platform pipelines; reduce toil by converting fixes into automation/tests.
  • Align with governance (Unity Catalog, Purview/Fabric governance) and change management; ensure auditability of infra and releases.
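The progressive-deployment responsibility above (Dev → test rings → Prod with validations and rollbacks) can be sketched as a small promotion loop; the ring names, validator signature, and return shape here are illustrative assumptions, not the team's actual tooling.

```python
# Sketch of ring-based promotion: deploy to each ring in order, run a
# validation, and roll back everything deployed so far on the first
# failure. Real pipelines would replace the list bookkeeping with
# actual deploy/rollback calls; this models only the control flow.

from typing import Callable, List

def promote_through_rings(release: str,
                          rings: List[str],
                          validate: Callable[[str, str], bool]) -> dict:
    """Promote release ring by ring; stop and roll back on failed validation."""
    deployed = []
    for ring in rings:
        deployed.append(ring)  # a real deploy step would run here
        if not validate(release, ring):
            return {"status": "rolled_back",
                    "failed_ring": ring,
                    "rolled_back": list(reversed(deployed))}
    return {"status": "released", "rings": deployed}

# Example: a validation that fails in the "test" ring
result = promote_through_rings(
    "1.4.0", ["dev", "test", "prod"],
    validate=lambda rel, ring: ring != "test",
)
```

Rolling back in reverse deployment order mirrors the common practice of unwinding the most recent change first.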

Benefits

  • Annual Variable Cash Incentive Program (VCIP) bonus
  • 8% 401k company match
  • Cash Balance Account pension
  • Medical, Dental, and Vision benefits with an annual company contribution to a Health Savings Account for employees enrolled in the High Deductible Health Plan (HDHP)
  • Total well-being programs and incentives, including Employee Assistance Plan, well-being reimbursement, and backup family care services