Staff Data Engineer

Syndio · Calgary, AB
$180,000 - $195,000 · Remote

About The Position

We’re looking for a Staff Data Engineer to define the architecture and long-term evolution of Syndio’s data platform as we scale into AI-driven products. This role owns the technical direction for how data is structured, standardized, modeled, and activated across the company, with a particular focus on enabling LLM-powered systems and decisioning workflows.

You will design, architect, and build the data pipelines that produce the data products powering analytics, product experiences, and Syndio's LLM-powered systems. Your work will shape how data is used across the company and enable the development of Syndio's decision graph and AI capabilities. This includes solving complex challenges such as multi-tenant schema standardization, automated field mapping, retrieval-ready data modeling, and data pipelines for AI evaluation and feedback loops.

You will operate across pipeline architecture, platform design, and cross-functional data strategy, partnering closely with product, data science, and engineering leadership.

Requirements

  • Deep experience in data engineering, data platform, or distributed systems roles
  • Proven track record designing and scaling data pipelines across teams or organizations
  • Experience solving ambiguous, high-impact platform problems and building systems widely adopted across teams
  • Strong ability to operate at both architectural and hands-on levels
  • Cross-functional leadership and ability to influence technical direction
  • Data modeling and schema design
  • Pipeline architecture and orchestration
  • Data warehouse and storage systems
  • Strong proficiency in Python and SQL
  • Experience with data patterns for AI/LLM systems: retrieval-ready data modeling, vector storage, embedding pipelines, and evaluation datasets
  • Cloud platforms: GCP (preferred), AWS, or Azure
  • Cloud data warehouses: BigQuery, Snowflake, Redshift, or similar
  • Data transformation: dbt or equivalent modern transformation frameworks
  • Relational databases: Postgres, MySQL, or similar
  • Streaming and CDC ingestion patterns (Datastream, Debezium, Kafka, or similar)
  • Data governance and lineage tooling

Nice To Haves

  • Experience building AI-native data platforms or developer tooling
  • Experience with event-driven or streaming architectures
  • Background in high-scale systems or large multi-tenant SaaS platforms
  • Familiarity with compensation or HR data domains

Responsibilities

  • Define the data architecture behind AI for Pay Decisions and future decisioning systems
  • Build the data foundation that powers LLM-based features, including the pipelines that feed retrieval, context generation, and evaluation workflows
  • Own high-leverage problems like automated schema mapping and multi-tenant data standardization
  • Establish standards for data contracts, modeling, and pipeline architecture across teams
  • Design data governance into the platform through policy tagging, PII handling, and lineage tracking
  • Build pipelines that support both batch analytics and the event-driven data flows behind AI workflows
  • Shape how data is structured, delivered, and made reliable for stakeholders across Syndio, from analytics and product teams to the teams training and evaluating AI systems

Benefits

  • Competitive Compensation
  • Syndio Equity
  • Paid vacation (20 days annually)
  • Paid sick & safe time
  • Compassion leave
  • Voting leave
  • Pension Contribution