Business Insights Engineer

Unilode Aviation

About The Position

The Business Insights Engineer supports the development of a scalable, governed, self-service analytics capability across Unilode by designing and maintaining trusted data models, semantic layers, and analytical datasets that enable reporting, automation, advanced analytics, and the generation of business insights. The role focuses on translating business processes into structured, optimised data solutions, improving data accessibility and adoption, and supporting the ongoing evolution of Unilode’s Enterprise Data Warehouse (EDW) and analytics engineering practices. The Business Insights Engineer works closely with subject matter experts, analysts, engineers, and stakeholders to ensure data products are reliable, maintainable, and aligned with business needs.

Requirements

  • Hands-on experience in data engineering, analytics engineering, or related fields.
  • Experience working with Databricks, including Jobs/Workflows, repositories, cluster policies, and Unity Catalog.
  • Strong proficiency in PySpark and SQL with experience building scalable ETL/ELT pipelines.
  • Solid understanding of Delta Lake architecture including Bronze, Silver, and Gold design patterns.
  • Experience working with AWS services including S3, IAM, and secure data architectures.
  • Understanding of dimensional modelling and star schema design.
  • Experience implementing CI/CD for data pipelines using Git-based workflows.
  • Strong analytical and problem-solving skills.
  • Ability to work effectively across engineering, analytics, product, and business teams.
  • Strong communication skills with the ability to support, mentor, and guide stakeholders.
  • Experience supporting semantic layer development or self-service analytics environments.
  • Exposure to machine learning pipelines or advanced analytics environments.
  • Experience with BI and visualisation platforms.
  • Familiarity with data governance and metadata management practices.

Responsibilities

  • Design and maintain logical and physical data models aligned to business processes and analytical requirements.
  • Translate operational and business requirements into scalable and governed data structures.
  • Apply industry-recognised modelling approaches such as the Kimball or Inmon methodologies.
  • Conduct data profiling and source system analysis to support design decisions.
  • Identify opportunities to improve data model performance, usability, and maintainability.
  • Balance business value with long-term cost of ownership and scalability considerations.
  • Build and maintain curated datasets and semantic layers to support dashboards, reporting, ML models, and data products.
  • Treat data as a product to improve discoverability, usability, and adoption across the organisation.
  • Develop automated reporting, dashboards, and alerting solutions using BI and visualisation tools.
  • Support scalable analytics by enabling self-service access to governed datasets.
  • Ensure data products are reliable, optimised, and aligned with business expectations.
  • Build and maintain scalable ETL/ELT pipelines using PySpark, SQL, and Databricks.
  • Support automation of legacy reporting and manual data processes.
  • Contribute to improving the speed, reliability, and efficiency of data pipelines and workflows.
  • Apply CI/CD principles, version control, and release management practices to data engineering activities.
  • Ensure solutions are maintainable, testable, and aligned to engineering standards.
  • Work closely with analysts, business users, product teams, and technical stakeholders to understand requirements and priorities.
  • Communicate technical concepts clearly to non-technical audiences.
  • Support analysts and data consumers in understanding and using available datasets effectively.
  • Provide guidance and mentorship to analytics teams on data modelling and engineering best practices.
  • Encourage adoption of self-service analytics capabilities across the organisation.
  • Support implementation of data governance standards, policies, and controls.
  • Create and maintain clear documentation for data models, workflows, and use cases.
  • Perform unit, integration, and performance testing to validate data quality and credibility.
  • Support initiatives that improve trust, consistency, and transparency in enterprise data.
  • Identify and escalate data quality issues, inconsistencies, or risks.
  • Identify opportunities to improve data quality, efficiency, and analytics engineering practices.
  • Investigate issues in systems, processes, and services and support the implementation of preventative measures.
  • Continually refine data models in response to user feedback and organisational change.
  • Keep up to date with emerging technologies, tools, and engineering techniques.
  • Explore and leverage AI capabilities to improve coding practices, modelling approaches, and insight generation.