Director of Data Engineering

Human Agency
$120,000–$200,000

About The Position

Join Human Agency as a Data Engineering Consultant to stabilize and grow modern data platforms and lead AI-enabled outcomes across multiple client engagements. You’ll rotate across projects; as needed, you may temporarily coordinate a client’s data function to ensure continuity and accelerate delivery, then shift into hands-on engineering, reliability work, or AI implementation. The role can evolve: you might be heads-down in data engineering one day and leading an AI implementation the next. You’ll pair deep engineering craft with clear executive communication and consulting polish.

Requirements

  • 7+ years in data engineering/analytics engineering with ownership of production pipelines and BI at scale.
  • Demonstrated success owning and stabilizing production data platforms and critical pipelines.
  • Strong grasp of modern data platforms (e.g., Snowflake), orchestration (Airflow), and transformation frameworks (dbt or equivalent).
  • Competence with data integration (ELT/ETL), APIs, cloud storage, and SQL performance tuning.
  • Practical data reliability experience: observability, lineage, testing, and change management.
  • Operates effectively in ambiguous, partially documented environments; creates order quickly through documentation and standards.
  • Prior ownership of core operations and reliability for business-critical pipelines with defined SLOs and incident response.
  • Demonstrated client-facing experience (consulting/agency or internal platform teams with cross-functional stakeholders) and outstanding written/verbal communication (executive briefings, workshops, decision memos).

Nice To Haves

  • Deep interest in Generative AI and Machine Learning.
  • Basic scripting ability in Python.
  • Practical Generative AI experience: shipped at least one end-to-end workflow (e.g., RAG) including ingestion, embeddings, retrieval, generation, and evaluation.
  • Working knowledge of LLM behavior (tokens, context windows, temperature/top-p, few-shot/tool use) and how to tune for quality/cost/latency.
  • Comfort with vector search (e.g., pgvector or a hosted vector store) and hybrid retrieval patterns.
  • Evaluation & safety basics: offline evaluation harnesses, lightweight online A/B tests, and guardrails for PII and prompt-injection.
  • MLOps for LLMs: experiment tracking, versioning of prompts/configs, CI/CD for data & retrieval graphs, and production monitoring (latency, cost, drift).
  • Python scripting for data/LLM utilities and service integration (APIs, batching, retries).
  • Familiarity with BI tools (Power BI/Tableau) and semantic layer design.
  • Exposure to streaming, reverse ETL, and basic MDM/reference data management.
  • Security & governance awareness (role‑based access, least privilege, data retention).

Responsibilities

  • Engagement Leadership: Coordinate across data engineering, analytics, and data science leads; run operating cadences, triage priorities, and manage releases.
  • Map ownership and dependencies; reduce single points of failure; maintain a living service catalog and decision log.
  • Lead transition planning and knowledge transfer with internal teams and vendors while sustaining delivery.
  • Platform & Pipeline Ownership: Build, operate, and improve ELT/ETL pipelines across batch and streaming sources.
  • Manage orchestration (e.g., Airflow), transformations, environments, and CI/CD for analytics code.
  • Optimize warehouse performance (e.g., Snowflake) and cost.
  • Rapidly discover existing pipelines and data contracts; map dependencies, SLAs/SLOs, and single points of failure; propose immediate stabilizations.
  • Data Reliability & Governance: Implement monitoring/alerting, data quality checks, and tests with clear SLOs.
  • Maintain lineage/metadata visibility and role-based access controls.
  • Participate in an incident response rotation; maintain runbooks and postmortems.
  • Establish change-management controls (versioning, approvals, environment promotion) for analytics code.
  • Analytics Enablement: Partner with analysts and business stakeholders to deliver trusted datasets and semantic models.
  • Support BI tools (Looker/Power BI/Tableau) and establish versioned, documented sources of truth.
  • Client Collaboration & Consulting: Translate business needs into technical data solutions and clear option sets (impact, risk, effort).
  • Facilitate discovery/working sessions; align requirements and prioritize tradeoffs.
  • Prepare executive-ready updates: concise narratives, metrics, and decision logs.
  • Manage scope and expectations; escalate risks early; build trust and influence across engineering, analytics, and business teams.
  • Documentation & Communication: Produce concise technical docs, decision logs, and release notes.
  • Translate technical tradeoffs into clear options for non-technical stakeholders.
  • Own day‑to‑day reliability for priority pipelines and critical dashboards; implement pragmatic monitoring/alerting.
  • Triage/resolve incidents; create or harden runbooks, playbooks, and on‑call rotations.
  • Establish lightweight governance: data quality checks, lineage visibility, access reviews, and change‑management basics.
  • Read, debug, and improve existing pipelines; create new connectors/transformations as needed.
  • Standardize patterns (e.g., ELT with versioned transformations, environment promotion, CI/CD for analytics code).
  • Recommend and implement pragmatic tooling upgrades without destabilizing production.
  • Lead structured knowledge transfer sessions and create handover materials.

Benefits

  • Base Salary: $120,000–$200,000/yr + performance-based incentives; final compensation commensurate with experience and location.