Data Solutions Engineer, Systems Admin

Toronto Zoo, Toronto, ON
Onsite

About The Position

The Systems Administrator, Data Solutions Engineer will play a key hands-on role in designing, building, maintaining, and improving the technical foundations that transform data into reliable, secure, and reusable assets for the Toronto Zoo. This position is responsible for delivering and supporting end-to-end data solutions that enable trusted reporting, analytics, and operational workflows across the Toronto Zoo. Working at the intersection of systems administration, application development, and data engineering, the Data Solutions Engineer designs and implements reliable solutions spanning system integrations, data pipelines, and internal applications/APIs. This position will also support implementation of the Strategic Plan.

Requirements

  • Post-secondary education in Information Technology, Computer Science, Information Systems, Software Engineering, or related discipline (or equivalent combination of education and demonstrated experience).
  • Minimum four years of progressive, hands-on experience in technical roles spanning systems/infrastructure and software development, with experience building, deploying, and maintaining data integrations/pipelines and the supporting services (APIs, jobs, automations) in an enterprise environment.
  • Demonstrated experience supporting and administering Microsoft server infrastructure/services and virtualization, Microsoft Azure and Microsoft 365 services.
  • Advanced experience with relational databases, data warehousing, and ETL/ELT processes.
  • Strong understanding of cybersecurity principles including access control, patch management, vulnerability remediation, and secure configuration.
  • Working knowledge of project management concepts and methodology.
  • Excellent stakeholder management: can translate ambiguous needs into technical plans, manage trade-offs, and communicate risks clearly to stakeholders at all levels.
  • Results-oriented self-starter with demonstrated ability to juggle multiple time-sensitive priorities, and adapt to and champion change.
  • Must be able to work weekends, shifts and holidays as scheduled.
  • Must be able to deal with staff and public in a courteous and efficient manner.

Nice To Haves

  • Formal training and/or certifications in relevant domains (e.g., Microsoft/Azure, cloud fundamentals, Linux/Windows administration, security, data engineering) are considered an asset.
  • Demonstrated ongoing professional development in modern software engineering, data engineering, and analytics practices (e.g., cloud services, data platforms, DevOps, data modeling) is considered an asset.
  • Experience supporting on-premises/hybrid data lakehouse architectures and deployments, including storage, compute, ingestion patterns, and operational considerations such as performance, scalability, backup/recovery, and access controls, is considered an asset.

Responsibilities

  • Design, build, and maintain reliable data pipelines and integrations to collect, transform, and deliver data from internal systems and external sources for reporting, analytics, and operational use.
  • Develop and maintain robust ETL/ELT workflows, including data ingestion, transformation logic, scheduling/orchestration, and error handling.
  • Collaborate with business stakeholders, analysts, and technical teams to translate requirements into practical data products (datasets, curated views, metrics, and APIs) with clear definitions and expected outcomes.
  • Implement and maintain data quality practices, including validation rules, reconciliation checks, exception reporting, and root-cause analysis of data issues.
  • Produce and maintain technical documentation for data flows, schemas, interfaces, and operational procedures.
  • Design and develop internal tools, web applications, and services that streamline data capture, access, and operational workflows across departments.
  • Build and maintain secure APIs and services to expose data and functionality to authorized users and systems, including authentication and role-based access controls where required.
  • Implement and maintain automated workflows (e.g., scheduled jobs, event-driven processes, and system-to-system automations) to reduce manual effort and improve data consistency.
  • Apply software engineering best practices, including code review, unit/integration testing, version control, and CI/CD (Continuous Integration/Continuous Deployment) pipelines to support maintainable and reliable delivery.
  • Support user adoption by working with stakeholders to validate solutions, provide technical guidance, and improve usability through iterative enhancements.
  • Administer and optimize Microsoft Azure and SaaS platforms, including compute, storage, networking, identity, security controls, monitoring/automation, and cost governance (resource standards, access policies, security baselines).
  • Install, configure, and support physical, virtualized, and cloud server environments (VMware/Hyper-V/Azure), including Windows Server services (Active Directory, DNS, DHCP, Group Policy, authentication) and enterprise storage (SAN/NAS).
  • Ensure service availability, performance, capacity, and resilience through proactive monitoring, tuning, backup/recovery practices, and operational runbooks.
  • Serve as an escalation resource for complex incidents and outages; lead triage and root-cause analysis, implement corrective actions, and support incident response and post-incident reporting.
  • Implement and maintain infrastructure security and compliance controls (patching, hardening, privileged access, firewall/network posture, endpoint protection, vulnerability management) and coordinate audits/reviews through remediation.
  • Maintain accurate infrastructure documentation and operational standards (configurations, procedures, diagrams) and support planned changes and upgrades through established change practices.