About The Position

Elevate your career journey by embracing a new challenge with Kinaxis. We are experts in tech, but it’s really our people who give us passion to always seek ways to do things better. As such, we’re serious about your career growth and professional development, because People matter at Kinaxis.

In 1984, we started out as a team of three engineers. Today, we have grown into a global organization with over 2,000 employees around the world and a brand-new HQ in Kanata North in Ottawa. As one of Canada’s Top Employers, we are proud to work with our customers and employees towards solving some of the biggest challenges facing supply chains today. At Kinaxis, we power the world’s supply chains to help preserve the planet’s resources and enrich the human experience. As a global leader in end-to-end supply chain management, we enable supply chain excellence for all industries, with more than 40,000 users in over 100 countries.

We are expanding our team as we continue to innovate and revolutionize how we support our customers. Our customers have the largest supply chains imaginable, and the scale, complexity, and volume of data driving those supply chains continues to grow. We build tools that make bringing that data into our platform, and managing it, possible, and we’re continuously improving to keep pace with an ever-evolving modern data landscape.

We’re looking for an accomplished software developer and architect with deep expertise in building data management and system‑integration tools, particularly leveraging the Databricks ecosystem. You have a strong understanding of the full data domain, from single‑event processing to the batch workloads that power AI/ML solutions, and their supporting technologies. You’re well‑versed in data quality, governance, and API lifecycle management, and you bring the technical leadership needed to guide diverse projects in a collaborative, fast‑moving environment.

Requirements

  • Post-secondary degree in Computer Science, Engineering, or equivalent related discipline.
  • 10+ years designing and delivering complex platform and data platform systems at enterprise scale.
  • Demonstrated architectural leadership across both application and data platforms.
  • Exceptional communicator capable of explaining technical decisions to engineers, PMs, executives, and cross‑functional stakeholders.
  • Proven ability to design scalable, secure, and maintainable distributed architectures, balancing tradeoffs across performance, reliability, cost, and developer experience.
  • Strong platform engineering background: Kubernetes, containerization, infrastructure automation, service design, API standards, and platform reliability practices.
  • Hands‑on experience with CI/CD, testing strategies, release automation, and platform‑level observability (logging, metrics, tracing) for both application and data workloads.
  • Proficiency with Databricks (Spark, Delta Lake, Unity Catalog) accompanied by the ability to align Databricks patterns with broader platform architectural decisions.
  • Experience with Azure and GCP cloud services, including storage (ADLS, GCS), compute (AKS, GKE), and messaging/streaming (Event Hub, Pub/Sub).
  • Demonstrated history of mentorship, architectural leadership, and raising engineering maturity across multiple teams.
  • A strong ability to rapidly learn and leverage modern technologies, AI included, to solve complex software problems and improve productivity.
  • Finds opportunities to accelerate the SDLC through innovative application of AI or other tooling, while upholding architecture consistency, secure design, and code-quality standards.
  • Reviews AI-generated code rigorously for correctness, architectural fit, integration risk, and edge case support with a growth mindset and bias for experimentation.

Nice To Haves

  • Familiarity with distributed datastores and storage engines where performance or I/O characteristics strongly shape architectural decisions.
  • Knowledge of API management patterns (Azure API Management, Apigee), and designing robust data and platform APIs.
  • Experience with data orchestration / transformation tools such as Airflow, dbt, Apache Hop, or Pentaho.
  • Architectural contributions to enterprise‑scale data governance, access control frameworks, lineage systems, and compliance‑driven data design.
  • Domain experience with supply chain data or similarly complex operational datasets.

Responsibilities

  • Lead the evolution of our Data Fabric.
  • Tackle complex and ambiguous technical challenges, rapidly learning and driving solutions.
  • Research and prototype exciting new platform API and data management capabilities.
  • Act as a steward of the codebase, promoting best practices for quality, security, performance, and maintainability.
  • Collaborate across teams to solve cross‑component integration challenges and advance shared platform capabilities.
  • Stay current on emerging technologies and ways of working, assess opportunities to increase velocity, and champion adoption.
  • Proactively mentor and support others while maintaining professionalism and approachability.
  • Actively contribute to planning, estimation, and team ceremonies.
  • Communicate technical concepts to diverse audiences, presenting at demos, knowledge transfers, executive reviews, and more.
  • Participate in customer meetings to understand needs and represent engineering.
  • Help shape long‑term technical strategy and guide the roadmap toward it.

Benefits

  • Flexible vacation and Kinaxis Days (company-wide day off on the last Friday of every month)
  • Flexible work options
  • Physical and mental well-being programs
  • Regularly scheduled virtual fitness classes
  • Mentorship programs and training and career development
  • Recognition programs and referral rewards
  • Hackathons