Senior Engineer, Data Platform

Movable Ink
Toronto, ON
$145,000 - $185,000

About The Position

Movable Ink scales content personalization for marketers through data-activated content generation and AI decisioning. The world’s most innovative brands rely on Movable Ink to maximize revenue, simplify workflow, and boost marketing agility. Headquartered in New York City with close to 600 employees, Movable Ink serves its global client base with operations throughout North America, Central America, Europe, Australia, and Japan.

As a Senior Engineer on the Data Platform team at Movable Ink, you will help design and build the systems that power how data flows through the organization. You will play a key role in developing and operating the unified data platform responsible for ingesting, processing, and exposing large volumes of data that drive Movable Ink’s products. Working closely with teammates across engineering, analytics, and infrastructure, you will build scalable ingestion pipelines and backend services that integrate data from a variety of sources while ensuring reliability, governance, and high availability across the platform. You will help evolve legacy pipelines toward modern data architectures and drive the unification of how our products and services access data. Your work will directly impact the growth and maturity of Movable Ink’s products and services.

Requirements

  • Deep understanding of data governance principles and data lifecycle management, including data quality, lineage, retention, and access control
  • Strong familiarity with backend database technologies (OLAP vs. OLTP) and when to apply each, with hands-on experience using data warehouse technologies (Databricks, ClickHouse, Redshift, etc.) at production scale
  • Strong SQL skills, with the ability to write and optimize complex queries
  • Skill at building complex systems: identifying core primitives and applying them to meet changing business needs and future roadmap requirements
  • Experience designing and operating data systems on a major cloud provider (AWS preferred; GCP is a plus) and working in a multi-cloud deployment environment
  • Proficiency with containerization and orchestration technologies, including Docker and Kubernetes
  • Proven ability to integrate systems, connecting legacy platforms with modern architectures through well-designed interfaces and migration strategies
  • Effective communicator who can facilitate system design discussions, write technical specs, document architectural decisions, and work across team boundaries
  • Experience with Infrastructure-as-Code (IaC) tools such as Terraform, CloudFormation, or similar

Nice To Haves

  • Familiarity with Elixir for building concurrent, fault-tolerant data services
  • Experience with data pipeline and streaming technologies such as Airflow, Kafka, Apache Flink, or similar
  • Hands-on experience with columnar/OLAP databases such as Databricks or ClickHouse

Responsibilities

  • Support the design and delivery of data ingestion pipelines and infrastructure
  • Assist in migrating legacy data lifecycle management to the new platform without disrupting existing data consumers
  • Establish and maintain SLIs and SLOs for the new ingestion and data platform, with dashboards and alerting to track performance
  • Build a flexible data storage layer supporting a variety of use cases, e.g., transactional, analytic, and machine learning workloads
  • Implement comprehensive monitoring, observability, and incident response practices for all event data pipelines and services
  • Collaborate with product engineering, analytics, and machine learning teams to define contracts, functional requirements, and standards
  • Design and implement a semantic metadata layer that classifies and labels data assets across both products, enabling consistent data discovery, exposure policies, and identification of cross-product data reuse opportunities
  • Architect and deliver a multi-tenant data model that supports secure data sharing and isolation across clients, with controls designed to meet regulatory and government compliance requirements


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 251-500 employees
