DataOps Engineer

Empower
Overland Park, KS (Remote)

About The Position

Our vision for the future is based on the idea that transforming financial lives starts by giving our people the freedom to transform their own. We have a flexible work environment and fluid career paths. We not only encourage but celebrate internal mobility. We also recognize the importance of purpose, well-being, and work-life balance. Within Empower and our communities, we work hard to create a welcoming and inclusive environment, and our associates dedicate thousands of hours to volunteering for causes that matter most to them. Chart your own path and grow your career while helping more customers achieve financial freedom. Empower Yourself.

Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time, including CPT/OPT.

The DataOps Engineer will own the DataOps lifecycle for our Snowflake-on-AWS platform: from contract-first design and CI/CD to observability, quality, release management, and incident response. You’ll turn data products into reliable services with SLAs/SLOs (freshness, accuracy, completeness, timeliness), automate promotion across environments, and hard-wire governance (PII tagging, masking, RBAC) so trusted data ships fast and safely.

Requirements

  • Education: Bachelor’s in Computer Science, Information Systems, Data/Analytics, or related; equivalent practical experience welcomed.
  • Experience: 5–8+ years in data engineering/analytics platform roles with 3+ years operating Snowflake in production.
  • DataOps skills: You’ve shipped contract-first pipelines, automated tests, and environment promotion at scale; you measure success with SLIs/SLOs and error budgets.
  • Snowflake depth: Warehouses, Streams/Tasks, Snowpipe/Kafka Connector, search optimization, materialized views, replication/failover; strong SQL and performance tuning.
  • Automation: Terraform (Snowflake provider), dbt (models/tests/docs), GitHub/GitLab/Azure DevOps; Python/Bash for tooling and checks.
  • Observability: Building alerts/dashboards from ACCOUNT_USAGE/ORGANIZATION_USAGE views (a sample alerting query follows this list); experience with data quality/observability platforms (e.g., GX/Soda/Monte Carlo/Bigeye) a plus.
  • Governance: Practical use of object tags, tag-based masking, row access policies, and evidence generation for audits (a masking sketch also follows this list).
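
To ground the observability bullet: a minimal sketch of the kind of alerting query this role implies, built on SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY. The `dataops_etl_%` query-tag convention and the 30-minute runtime threshold are illustrative assumptions, not details from this posting.

```sql
-- Minimal sketch: surface failed or SLO-breaching pipeline queries from the
-- last 24 hours so they can be routed to on-call alerting.
SELECT
    query_id,
    query_tag,
    warehouse_name,
    error_code,
    error_message,
    total_elapsed_time / 1000 AS elapsed_seconds
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('hour', -24, CURRENT_TIMESTAMP())
  AND query_tag LIKE 'dataops_etl_%'              -- hypothetical tag convention
  AND (execution_status = 'FAIL'                  -- hard failures
       OR total_elapsed_time > 30 * 60 * 1000)    -- assumed 30-min runtime SLO (ms)
ORDER BY start_time DESC;
```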
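
Likewise for the governance bullet: a minimal sketch of tag-based masking and a row access policy. The governance schema, PII_READER role, and analytics.customers table are hypothetical names, not details from this posting.

```sql
-- Minimal sketch (all object and role names are illustrative).
CREATE TAG IF NOT EXISTS governance.pii_type;

-- Mask string columns for everyone except an approved reader role.
CREATE MASKING POLICY IF NOT EXISTS governance.mask_pii_string
  AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '***MASKED***' END;

-- Tag-based masking: any column carrying the tag inherits the policy.
ALTER TAG governance.pii_type SET MASKING POLICY governance.mask_pii_string;

-- Classify a column once; masking then follows the tag automatically.
ALTER TABLE analytics.customers MODIFY COLUMN email
  SET TAG governance.pii_type = 'email';

-- Row access policy: visibility driven by a role-to-region mapping table.
CREATE ROW ACCESS POLICY IF NOT EXISTS governance.region_rap
  AS (region STRING) RETURNS BOOLEAN ->
  EXISTS (
    SELECT 1 FROM governance.region_grants g
    WHERE g.role_name = CURRENT_ROLE() AND g.region = region
  );

ALTER TABLE analytics.customers
  ADD ROW ACCESS POLICY governance.region_rap ON (region);
```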

Nice To Haves

  • Streaming/event pipelines (Kafka/Kinesis), CDC patterns, and backfill strategies.
  • Experience with OpenLineage/Marquez and catalog integrations (Collibra/Alation/Purview).
  • Prior FinOps or capacity-planning ownership for data platforms.
  • Familiarity with BI semantic layers and contract enforcement at consumption (Looker/Power BI/Tableau).

Responsibilities

  • Data product lifecycle & contracts: Define and enforce data contracts (schemas, SLAs/SLOs, versioning, deprecation) for batch/streaming products, and guard against breaking changes. Maintain a schema registry/contract repo and promotion workflow (dev → test → prod) with automated checks and approvals (a schema-drift check is sketched after this list).
  • CI/CD & environment management: Manage Snowflake objects and ELT as code (Terraform + dbt + Snowflake CLI); build Git-based pipelines with pre-merge tests (unit SQL, schema, data quality) and deterministic rollbacks (see the clone/rollback sketch below). Standardize environment topologies, seed/test data, and release calendars to reduce lead time and change failure rate.
  • Orchestration & reliability: Engineer idempotent pipelines using Streams/Tasks, Snowpipe/Kafka, and orchestration (Airflow/Dagster/Step Functions/Lambda); a Streams/Tasks sketch follows this list. Publish runbooks and SLOs for datasets/jobs (freshness, latency, failure rate); run capacity planning and game days.
  • Data quality & observability: Implement the data test pyramid: column/row checks, anomaly detection, reconciliation, and end-to-end validation (a freshness check is sketched below). Build monitoring/alerts from ACCOUNT_USAGE/ORGANIZATION_USAGE and pipeline metadata (QUERY/LOAD/ACCESS history); wire alerts to on-call with clear ownership and auto-ticketing.
  • Governance-by-design (with DG & Security): Automate PII classification and object tags; enforce tag-based masking, row access policies, RBAC role families, and network policies (see the masking sketch under Requirements). Ensure lineage and glossary links (Collibra/OpenLineage) are updated on every release; produce audit evidence on demand.
  • Incident & change management: Lead data incident triage (bad/missing/late data), customer communications, RCAs, and post-incident hardening. Operate change control with impact analysis, blast-radius limits, and progressive delivery (canaries/backfills).
  • FinOps & usage analytics: Track queries, warehouse utilization, and job cost; implement guardrails (rightsizing, auto-suspend hygiene, query tagging/chargeback), as in the cost sketch after this list. Recommend workload placement (Snowflake vs. adjacent engines) balancing SLA, quality, and cost.
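
The schema-drift check referenced in the contracts bullet might look like the following; the analytics_prod/analytics_dev databases and MARTS schema are illustrative assumptions.

```sql
-- Minimal sketch: flag columns present in the prod contract but dropped or
-- re-typed in the candidate dev build -- i.e., breaking changes to block
-- before promotion.
SELECT
    p.table_name,
    p.column_name,
    p.data_type AS prod_type,
    d.data_type AS dev_type
FROM analytics_prod.information_schema.columns p
LEFT JOIN analytics_dev.information_schema.columns d
       ON  d.table_schema = p.table_schema
       AND d.table_name   = p.table_name
       AND d.column_name  = p.column_name
WHERE p.table_schema = 'MARTS'
  AND (d.column_name IS NULL               -- column dropped: breaking
       OR d.data_type <> p.data_type);     -- type changed: breaking
```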
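
For environment seeding and deterministic rollback, zero-copy cloning and Time Travel are the natural Snowflake primitives; the database/table names and the one-hour offset below are assumptions.

```sql
-- Minimal sketch: a zero-copy clone stands up a writable test environment
-- instantly, without duplicating storage.
CREATE OR REPLACE DATABASE analytics_test CLONE analytics_prod;

-- Rollback sketch: restore a table to its state one hour ago via Time
-- Travel, then swap it into place atomically.
CREATE TABLE analytics_prod.marts.orders_restore
  CLONE analytics_prod.marts.orders AT (OFFSET => -3600);
ALTER TABLE analytics_prod.marts.orders
  SWAP WITH analytics_prod.marts.orders_restore;
```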
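
The Streams/Tasks pattern referenced under orchestration & reliability could be sketched as follows; object names are illustrative.

```sql
-- Minimal sketch: a stream tracks changes on the raw table; a task drains it
-- on a schedule only when data is waiting, and the MERGE keeps the load
-- idempotent under retries.
CREATE STREAM IF NOT EXISTS raw.orders_stream ON TABLE raw.orders;

CREATE TASK IF NOT EXISTS raw.load_orders
  WAREHOUSE = transform_wh
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('raw.orders_stream')
AS
  MERGE INTO marts.orders AS t
  USING raw.orders_stream AS s
    ON t.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET t.status = s.status, t.updated_at = s.updated_at
  WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at)
    VALUES (s.order_id, s.status, s.updated_at);

ALTER TASK raw.load_orders RESUME;  -- tasks are created suspended
```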
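
The freshness check referenced under data quality & observability; the table name and the 60-minute SLO are assumptions.

```sql
-- Minimal sketch: flag a breach when the newest row is older than the
-- contracted freshness SLO, so the alert can page on-call.
SELECT
    'marts.orders'                                           AS dataset,
    MAX(loaded_at)                                           AS last_load,
    DATEDIFF('minute', MAX(loaded_at), CURRENT_TIMESTAMP())  AS staleness_min,
    IFF(DATEDIFF('minute', MAX(loaded_at), CURRENT_TIMESTAMP()) > 60,
        'BREACH', 'OK')                                      AS freshness_status
FROM marts.orders;
```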
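
And the cost guardrail referenced under FinOps & usage analytics; the tag convention and the $3/credit price are illustrative, not contract terms.

```sql
-- Minimal sketch: tag every pipeline session so spend is attributable
-- per team and job...
ALTER SESSION SET QUERY_TAG = 'team=dataops;job=orders_load';

-- ...then roll up 30 days of warehouse credits for chargeback.
SELECT
    warehouse_name,
    DATE_TRUNC('day', start_time) AS usage_day,
    SUM(credits_used)             AS credits,
    SUM(credits_used) * 3.00      AS est_usd   -- assumed $3/credit
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY credits DESC;
```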

Benefits

  • Medical, dental, vision and life insurance
  • Retirement savings – 401(k) plan with generous company matching contributions (up to 6%), financial advisory services, potential company discretionary contribution, and a broad investment lineup
  • Tuition reimbursement up to $5,250/year
  • Business-casual environment that includes the option to wear jeans
  • Generous paid time off upon hire – including a paid time off program plus ten paid company holidays and three floating holidays each calendar year
  • Paid volunteer time — 16 hours per calendar year
  • Leave of absence programs – including paid parental leave, paid short- and long-term disability, and Family and Medical Leave (FMLA)
  • Business Resource Groups (BRGs) – BRGs facilitate inclusion and collaboration across our business internally and throughout the communities where we live, work and play. BRGs are open to all.