Data Engineer

Gore Mutual Insurance
Toronto, ON
Hybrid

About The Position

As a Data Engineer at Gore Mutual Insurance with a strong technical background in software engineering or computer science, you will play a pivotal role in designing, building, and maintaining our data platform. Your work will facilitate data accessibility, accuracy, and accountability, enabling data-driven decision-making across our organization. You will collaborate closely with cross-functional teams to ensure our data processes are robust and scalable.

At Gore Mutual, we’ve always set ourselves apart as a modern mutual that does good. Now, we’re proudly building on that legacy to transform our company—and our industry—for the better. Effective January 1, 2026, Gore has joined Beneva—the country’s largest mutual insurance company—as part of its Property & Casualty operations in Ontario and Western Canada. During 2026, Gore will combine its operations with Unica Insurance, Beneva’s Ontario-based subsidiary specializing in niche commercial and personal insurance, creating a stronger, more diversified mutual insurer with greater scale and long-term stability. Every decision and investment remains anchored in long-term benefits to customers, members, and communities. Come join us.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Software Engineering or a related field.
  • A minimum of 4–5 years of relevant experience as a data engineer is required, including experience in data engineering, data system development, or related roles.
  • Strong understanding of data structures, modern data modeling, and software architecture.
  • Good knowledge of Microsoft Azure services (DevOps, Databricks, SQL Server, Event Hubs, Web Apps, Data Factory, Azure Storage, Key Vault, etc.)
  • 2+ years of experience in ML engineering and MLOps, including defining and enforcing MLOps best practices (versioning, governance, and monitoring). Proven track record of deploying and maintaining ML models in production at scale.
  • Experience building automated and reusable pipelines for ML model training, evaluation, retraining, and deployment; monitoring ML model usage, latency, cost, and drift; and optimizing model serving for low latency and high throughput.
  • Experience with ML orchestration: developing pipelines that automate feature engineering, predictive model training and testing, prompt engineering, fine-tuning, and RAG workflows. Strong Python skills and familiarity with LLM frameworks (LangChain, LlamaIndex), Azure OpenAI models and agentic AI, CI/CD, YAML pipelines, and MLflow.
  • Experience with software design patterns and test-driven development (TDD)
  • Proficiency in Python, including a strong grasp of object-oriented and functional programming paradigms.
  • Solid understanding of Spark concepts and distributed systems, including data transformations, RDDs, DataFrames, and Spark SQL.
  • Strong SQL skills and expertise in database management and performance tuning.
  • Experience with data lakehouse and Medallion architectures.
  • Strong problem-solving and critical-thinking abilities.
  • Strong communication and collaboration skills.
  • Experience with version control systems (Git) and CI/CD practices.
  • Familiarity with data governance principles and metadata management practices.
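As an illustration of the reusable, functional pipeline work this role calls for, here is a minimal, hypothetical Python sketch. The step names, record fields, and the cents-to-dollars convention are invented for the example; production pipelines here run on the Azure/Databricks stack listed above.

```python
from typing import Callable, Dict, List

# A record is a plain dict in this sketch; in practice this would be a
# Spark DataFrame row or a typed dataclass.
Record = Dict[str, object]

def drop_nulls(rows: List[Record]) -> List[Record]:
    """Remove records whose 'amount' field is missing."""
    return [r for r in rows if r.get("amount") is not None]

def normalize_currency(rows: List[Record]) -> List[Record]:
    """Convert amounts from cents to dollars (assumed source convention)."""
    return [{**r, "amount": r["amount"] / 100} for r in rows]

def pipeline(*steps: Callable) -> Callable:
    """Compose transformation steps left-to-right into one reusable callable."""
    def run(rows: List[Record]) -> List[Record]:
        for step in steps:
            rows = step(rows)
        return rows
    return run

clean = pipeline(drop_nulls, normalize_currency)
print(clean([{"amount": 1999}, {"amount": None}]))  # [{'amount': 19.99}]
```

Composing small, independently testable steps like this is also what makes the test-driven development requirement practical: each step gets its own unit tests, and the composed pipeline gets an integration test.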

Nice To Haves

  • Azure certifications (Microsoft Certified: Azure Data Engineer Associate).
  • Databricks certifications (Databricks Certified Data Engineer Professional).
  • Experience with modern data transformation tools (dbt, Dataform, or similar) for building scalable analytics workflows.
  • Proficiency in dimensional modeling techniques such as star schema and snowflake schema.
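To make the dimensional-modeling bullet concrete, here is a tiny, hypothetical star schema: one fact table joined to its dimension tables. All table names, columns, and values are invented for illustration, using SQLite only so the example is self-contained.

```python
import sqlite3

# One fact table (fact_sales) surrounded by dimension tables
# (dim_date, dim_product) -- the "star" in star schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
INSERT INTO dim_date VALUES (20260101, 2026);
INSERT INTO dim_product VALUES (1, 'Policy A');
INSERT INTO fact_sales VALUES (20260101, 1, 250.0), (20260101, 1, 100.0);
""")

# Typical analytics query: aggregate facts, described by dimension attributes.
row = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name
""").fetchone()
print(row)  # ('Policy A', 350.0)
```

A snowflake schema would further normalize the dimension tables (e.g. splitting product attributes into their own tables) at the cost of extra joins.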

Responsibilities

  • Design and implement robust data infrastructure, tooling, workflows, and models that power the data platform.
  • Build and maintain enterprise data assets to support business reporting and analytical modelling for all business stakeholders.
  • Ensure integration of required tools, monitoring health of data platform, and maintain CI/CD pipelines to enforce standards. Identify opportunities for code optimization and operational efficiencies.
  • Automate day-to-day data operations, pipeline monitoring, and data integration with external systems.
  • Ensure that security protocols following best practices are in place to protect against potential security threats.
  • Create and maintain frameworks for metadata, data tagging, and data lineage.
  • Implement data quality and monitoring frameworks to ensure data reliability.
  • Optimize data pipelines to ensure efficient data flow.
  • Ensure that the data extracted from sources is accurate, complete, and usable. This might involve checking for missing values, inconsistent formats, or anomalies that could indicate errors.
  • Test the efficiency and speed of data pipelines and databases. This can help identify bottlenecks and optimize performance.
  • Verify that different components of the data infrastructure work together as expected. This includes checking that data flows correctly from sources to databases, and from databases to applications.
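The data-quality responsibilities above (checking for missing values, inconsistent formats, and anomalies) can be sketched in a few lines of Python. The field names, the `P-0000` identifier format, and the sample rows are all hypothetical; a production framework would emit metrics to the platform's monitoring stack rather than return a list.

```python
import re
from typing import Dict, List, Tuple

def quality_report(rows: List[Dict]) -> List[Tuple[int, str]]:
    """Return (row index, issue) pairs for records failing basic checks."""
    issues = []
    for i, r in enumerate(rows):
        # Missing value check
        if r.get("policy_id") is None:
            issues.append((i, "missing policy_id"))
        # Format consistency check (assumed 'P-' plus four digits)
        elif not re.fullmatch(r"P-\d{4}", r["policy_id"]):
            issues.append((i, "bad policy_id format"))
        # Simple anomaly check
        if r.get("premium", 0) < 0:
            issues.append((i, "negative premium"))
    return issues

rows = [
    {"policy_id": "P-1234", "premium": 500.0},
    {"policy_id": None, "premium": 300.0},
    {"policy_id": "X99", "premium": -10.0},
]
print(quality_report(rows))
# [(1, 'missing policy_id'), (2, 'bad policy_id format'), (2, 'negative premium')]
```

Running such checks at each pipeline stage, and failing fast on violations, is what turns the "accurate, complete, and usable" requirement into something monitorable.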

Benefits

  • Extended health and dental benefits
  • Disability insurance
  • Retirement plan matching
  • Paid time off
  • Recognition and perk programs