MLOps / Data Engineer

Triton Digital Canada Inc.
Remote

About The Position

Modern advertising marketers allocate spending through automated systems that interpret signals. For a channel to capture its fair share of budget, its inventory must be legible to those systems: standardized signals, structured metadata, and machine-readable supply pathways. For the next evolution of media buying, audience signal legibility is even more consequential. Agentic buying, in which autonomous systems independently interpret objectives, evaluate options, negotiate terms, and execute campaigns, is moving from concept to production. These systems don't browse inventory the way a human planner does. They query structured environments, evaluate supply through machine-readable signals, and pass over inventory they cannot read.

Our Mission

Triton Digital builds the infrastructure layer that makes audio inventory legible to modern and next-generation advertising markets. Our platform enables broadcasters, independent podcasters, and streaming music services to participate in automated buying on equal terms with the major platforms, aggregating over 100 billion audio impressions per month across podcast, streaming, and broadcast radio inventory. The listener data team is at the heart of that mission. We enrich the listener profile to enable better advertising targeting through services including integration with Data Management Platforms (DMPs), the Profiler, the GeoIP service, and other systems that make listener audiences continuously discoverable and actionable for buyers.

The Role

As our MLOps Data Engineer, you'll be the bridge between data science and production systems, ensuring that models don't just work in notebooks but thrive in real-world environments. You'll design and automate CI/CD pipelines, optimize large-scale data processing with Apache Spark, and leverage Databricks to deliver machine learning solutions that are reliable, scalable, and fast.
Your work will directly determine how quickly we can turn listener intelligence into structured, queryable signals that advertising systems (today's DSPs and tomorrow's agentic buyers) can act on.

Requirements

  • Proven experience in Data Engineering, MLOps, and DevOps roles with a focus on automation and scalability.
  • Strong programming skills in Python, with hands-on experience in Apache Spark. Scala is a huge plus.
  • Advanced expertise in Databricks, including Delta Lake, Structured Streaming, and feature engineering.
  • Solid understanding of CI/CD principles and tools (e.g., GitHub Actions, Jenkins, Azure DevOps, GitLab CI, ArgoCD).
  • Familiarity with cloud platforms (AWS, Azure, or GCP) for data and ML workloads.
  • A problem-solving mindset and the ability to work closely with cross-functional teams.
  • Strong architectural mindset, capable of evaluating trade-offs across cost, performance, scalability, and maintainability when selecting tools and designing systems.
  • Experience working with containerized and orchestrated environments (Kubernetes / OpenShift), including deployment, scaling, and fault tolerance of data and ML workloads.
  • Advanced English required. French is an asset.

Nice To Haves

  • Familiarity with IAB data standards, programmatic advertising infrastructure, or AdTech data pipelines is a strong asset.

Responsibilities

  • Design, implement, and maintain CI/CD pipelines for machine learning workflows using tools like GitHub Actions, Azure DevOps, or Jenkins.
  • Build and optimize data processing pipelines in Apache Spark (PySpark and Scala) for large-scale, distributed listener datasets.
  • Deploy and manage Databricks environments, ensuring efficient cluster usage, job scheduling, and cost optimization.
  • Collaborate with data scientists to productionize ML models, integrating them into scalable APIs or batch processing systems that feed real-time, machine-readable audience signals.
  • Implement automated testing, monitoring, and alerting for ML pipelines to ensure the reliability and reproducibility that certified buyers require.
  • Champion best practices in version control, model registry management, and environment reproducibility.
  • Help evolve our listener data infrastructure toward agent-compatible supply — live, structured, queryable data feeds that autonomous buying systems can discover and act on without human mediation.

Benefits

  • Fully remote position (must be based in ONTARIO or QUEBEC)
  • 4 weeks of vacation + 5 paid personal days annually
  • Group insurance programs as of your first day, including access to telemedicine and an EAP
  • Collective RRSP with matching contribution
  • Internet reimbursement and more

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 101-250
