About The Position

At Hive, we create moments that matter by helping event marketers connect with their biggest fans. Our platform powers marketing for over 1,500 iconic events, festivals, venues, and promoters across North America, enabling them to grow their customer base and sell out shows with intelligent, automated, and personalized email and SMS campaigns. Hive integrates with more than 25 platforms, including ticketing partners like Ticketmaster and e-commerce partners like Shopify, giving brands rich, real-time customer data they can segment and act on to engage their audiences with precision and impact.

The R&D Data team at Hive is responsible for storing and querying production data at scale. Rather than focusing solely on BI or dashboards, the team builds the systems that power Hive's products and keep data accessible, reliable, and performant. As a Senior Data Engineer, you will play a crucial role in evolving Hive's data platform, which directly shapes what customers can do, how fast the product moves, and how confidently leadership makes decisions. You will own outcomes, not just tasks, and will care about any business metric that data touches.

Founded in 2014 at the University of Waterloo and a Y Combinator alumnus, Hive's team is now 100% remote across Canada, and we strive to provide an online work environment that balances work-life and team connection.

Requirements

  • 8+ years of hands-on data engineering experience, with a proven track record of designing, building, and operating large-scale distributed data systems in production (high-throughput event streams, real SLAs, and real consequences when things fail).
  • Strong foundations in distributed systems principles, including partitioning strategies, consistency models, backpressure handling, fault tolerance, and capacity planning for 10x the volume you designed for.
  • End-to-end ML engineering experience: feature engineering and feature store design, training pipeline orchestration, model deployment and serving infrastructure, and production monitoring including drift detection and retraining triggers.
  • Experience applying LLMs and agentic systems in production data or ML contexts, whether enriching pipelines, automating classification, or building autonomous workflow components.
  • A product and commercial orientation: you consistently frame technical decisions in terms of customer impact and business outcomes, and you have the stakeholder communication skills to convey those trade-offs to non-technical audiences.
  • Comfortable operating independently and making progress in ambiguous, fast-changing environments.
  • Biased toward action: willing to make decisions with imperfect information, iterate quickly, and communicate openly with other teams across product and engineering.
  • Skilled at troubleshooting complex systems and building durable solutions when things break.
  • Excited to shape the future of Hive’s data infrastructure and team in a high-growth, fast-paced company.

Nice To Haves

  • History of owning or re-architecting a data platform end-to-end in a fast-growing environment.
  • Background in SaaS or event-driven products where data systems directly power user-facing features.

Responsibilities

  • Design and own a cloud-native big data platform handling audience data for millions of attendees and billions of interactions a year, building infrastructure that determines the quality of every insight, recommendation, and decision Hive's customers make.
  • Design and own the infrastructure that takes models from experiment to production, including feature stores, training pipelines, model serving, and monitoring, moving fluidly between data engineering and ML engineering to ensure reliable, low-latency access to features in production.
  • Own the full pipeline from Change Data Capture through validation, transformation, and denormalization, understanding the business impact of pipeline delays, metric drifts, or stale data.
  • Treat data as a product, defining SLAs, obsessing over data health, and building for discoverability, shipping data products that internal teams and customers depend on like a production API.
  • Bring an agentic engineering mindset to your work, using AI coding agents (e.g. Claude Code) as a force multiplier, and build LLM-powered pipelines and autonomous agents that enrich, classify, and act on audience data at scale.

Benefits

  • Meaningful salary and equity: you're rewarded based on impact.
  • The compensation range for this role is $123,600 to $187,900 CAD per year, depending on qualifications and experience.
  • New team members typically start between $123,600 and $155,000, based on experience and alignment with the expectations outlined in this posting.
  • Work fully remote from the comfort of your home.
  • Flexible work hours: minimal meetings and no 9-5.
  • Health & Dental coverage with Parental Leave top-ups in addition to EI benefits.
  • Unlimited vacation/PTO: so you can be happy and healthy!