Platform Engineer [Databricks]

Engineers and Constructors International Inc., Houston, TX

About The Position

We are seeking a Platform Engineer with deep expertise in Databricks administration, data governance, and platform‑level engineering standards. This role enables multiple analytics and AI teams to build safely, efficiently, and consistently on a shared Databricks platform by enforcing data quality, ingestion standards, security policies, and cost governance. You will be the technical owner of platform guardrails, operational stability, access patterns, and cost controls—ensuring the platform scales reliably across business teams.

Requirements

  • 5+ years in data engineering or platform engineering, with at least 2–3 years in Databricks administration.
  • Expert knowledge of Unity Catalog, cluster policies, Delta Lake, Spark, workspace configuration, and jobs.
  • Strong grounding in data governance, data modeling, ingestion frameworks, schema enforcement, versioning, and lineage.
  • Proven experience implementing RBAC and ABAC in Databricks or similar platforms.
  • Experience with cost optimization, monitoring, billing logs, and compute governance.
  • Strong Python/PySpark and SQL skills; familiarity with DLT, Airflow, or Databricks Workflows.
  • Strong communication skills with ability to set standards and influence teams diplomatically.
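To illustrate the kind of schema enforcement and data quality work this role involves, here is a minimal, hypothetical sketch of an ingestion-time record validator. Column names, types, and rules are invented for illustration; on the actual platform this would typically be expressed through Delta Lake schema enforcement or DLT expectations rather than hand-rolled Python.

```python
# Hypothetical sketch of a platform-level ingestion rule: every record
# must carry a fixed set of columns with the expected Python types.
# Column names and types here are illustrative, not from any real pipeline.

REQUIRED_COLUMNS = {"event_id": str, "event_ts": str, "amount": float}

def validate_record(record: dict) -> list[str]:
    """Return a list of human-readable violations for one ingested record.

    An empty list means the record passes the (illustrative) schema rules.
    """
    violations = []
    for col, expected_type in REQUIRED_COLUMNS.items():
        if col not in record:
            violations.append(f"missing column: {col}")
        elif not isinstance(record[col], expected_type):
            actual = type(record[col]).__name__
            violations.append(f"bad type for {col}: {actual}")
    return violations
```

In practice a platform team would register checks like this centrally so every onboarded pipeline inherits them, rather than each team re-implementing validation.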

Nice To Haves

  • Experience in large-scale enterprise data platforms (Azure/AWS/GCP).
  • Familiarity with Trading & Supply or other high‑stakes analytical environments.
  • Experience creating dashboards for governance, cost, compliance, and pipeline health.
  • Experience with CI/CD, GitHub Actions, Azure DevOps, or similar tools.

Responsibilities

  • Administer Databricks workspaces, clusters, jobs, Unity Catalog, compute policies, environment configuration, and platform guardrails.
  • Implement and maintain RBAC and ABAC access controls for secure, compliant data access.
  • Define and enforce data ingestion standards, naming conventions, schema rules, Delta Lake design patterns, and data quality expectations.
  • Set platform‑wide standards for ingestion pipelines, Delta architecture, lineage, versioning, and validation.
  • Review and approve onboarded pipelines for compliance with platform requirements.
  • Partner with data engineering teams to uplift patterns and enforce consistency.
  • Manage workspace and catalog permissions, row/column‑level policies, attribute‑based filtering, and workspace isolation.
  • Collaborate with security teams to maintain compliance and enforce global data protection standards.
  • Implement cost thresholds, alerts, compute policies, and usage dashboards to prevent overspend.
  • Monitor job and cluster costs, detect anomalies, and recommend optimization actions.
  • Provide visibility into SKU‑level spend and workspace cost patterns.
  • Ensure platform reliability through automated testing, CI/CD templates, and code governance.
  • Build dashboards to track code compliance, data access, pipeline health, schema drift, and cost thresholds.
  • Resolve platform incidents and prevent recurrence by strengthening guardrails and configurations.
  • Define guardrails for building on the platform: ingestion, Delta conventions, CI/CD, observability, and AI/ML patterns.
  • Coach data/analytics teams on compliant onboarding and optimal platform usage.
  • Maintain internal documentation, patterns, code templates, and guidance.
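As a flavor of the cost-governance responsibilities above, the following is a minimal sketch of anomaly detection over daily spend. The function name, threshold, and data are assumptions for illustration; real monitoring would draw on Databricks billing/usage logs and likely a more robust statistical method.

```python
# Illustrative sketch: flag days whose cost deviates sharply from the mean.
# In production this would run over billing-log exports, not a plain list.
from statistics import mean, pstdev

def flag_cost_anomalies(daily_costs: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of days whose cost is more than z_threshold
    population standard deviations away from the overall mean."""
    mu = mean(daily_costs)
    sigma = pstdev(daily_costs)
    if sigma == 0:  # flat spend: nothing can be anomalous
        return []
    return [i for i, c in enumerate(daily_costs)
            if abs(c - mu) > z_threshold * sigma]
```

Flagged days would then feed the alerting and usage dashboards described above, prompting a review of the offending jobs or clusters.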