Data Operations Senior Lead

Workiva Inc.
Remote

About The Position

At Workiva, we are building the data foundation for the next generation of AI-enabled platforms. Our Data Platform Ops team is at the center of this mission, merging software development with infrastructure operations to design and manage large-scale data systems. We are looking for a Data Operations Lead to ensure the reliability, scalability, and operational excellence of our analytics and data systems. This role is operationally critical and technically demanding. As a senior technical leader, you will be responsible for keeping our data platforms available and performant, providing technical direction and guidance to an offshore contract team, and enabling our data engineering teams to move fast with confidence. You will be a key driver of technical strategy and execution without direct people management responsibilities. If you are energized by solving complex data problems at scale and want your work to have a direct impact every day, join us on this transformative journey.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
  • 6+ years of experience in SRE, DevOps, or Systems Engineering roles, with a deep focus on data infrastructure, including experience leading technical projects or teams.
  • Proven ability to provide clear technical leadership, direction, and oversight to cross-functional or geographically distributed engineering teams (e.g., contract or offshore teams).
  • Deep understanding of data modeling, warehousing, and building scalable ETL/ELT pipelines.
  • Hands-on experience with Snowflake and dbt is a must, including expert-level Snowflake SQL.
  • Expert-level hands-on experience with Terraform, Splunk, and Airflow or similar technologies required.
  • High proficiency in Python, plus experience with another programming language such as Java, Go, or Scala.
  • Experience managing ingestion frameworks such as Fivetran, CData, Census, and Openflow.
  • Experience with S3, Athena, and Iceberg required.

Nice To Haves

  • Extensive experience with Kubernetes, Docker, and cloud-managed services (AWS preferred).
  • Familiarity with Unix/Linux system internals, networking, and distributed systems.
  • Experience in designing, analyzing, and operating large-scale distributed systems and massive data warehouses.
  • Strong background in cost optimization, usage governance, and maturing SRE practices within a data ecosystem.

Responsibilities

  • Provide expert technical guidance and direction to an offshore contract team to ensure high-quality execution of platform operations, automation, and reliability projects.
  • Act as the primary technical point of contact and decision-maker for the team.
  • Own the operational health and performance of the data platform.
  • Define and implement reliability goals (SLIs/SLOs) and establish sustainable mechanisms for scaling systems through automation.
  • Lead and drive incident response and management for platform-related production issues.
  • Conduct blameless post-mortems and drive systemic enhancements in reliability and efficiency based on findings.
  • Design and implement monitoring frameworks to govern service-oriented architecture (SOA) efficiently and intelligently.
  • Establish alerting for system health, query behavior, and performance bottlenecks.
  • Reduce operational toil by building self-service workflows, "guardrails," and infrastructure-as-code (IaC) solutions.
  • Automate administration tasks, environment provisioning, and usage monitoring, leveraging and mentoring the offshore team to execute.
  • Manage the "traffic control" of data tasks (using tools like Airflow, Dagster, or Prefect) to ensure complex dependencies are handled efficiently.
  • Ensure the right people have access to the right data (role-based access control, or RBAC) without slowing down the business.
  • Build automated "circuit breakers" that stop data from reaching the warehouse if it doesn't meet quality standards (e.g., missing values or incorrect formatting).
  • Partner with Legal/Security teams to ensure data handling meets GDPR, CCPA, or industry-specific regulations (HIPAA, SOX).
  • Manage the cloud budget (Snowflake/BigQuery/Databricks costs) and forecast how much compute capacity the company will need as data volume grows.
  • Translate technical constraints into business impact: if a project is delayed, explain why in terms of risk and data integrity rather than just "the code broke."
  • Consult with and provide expert operational support to data engineering, product teams, and stakeholders who utilize the Data Platform. This is critical to ensuring the performance and reliability of downstream data products.

Benefits

  • Salary range in the US: $151,000.00 - $242,000.00
  • A discretionary bonus typically paid annually
  • Restricted Stock Units granted at time of hire
  • 401(k) match and comprehensive employee benefits package