QA Lead - Capital Markets

Jay Analytix
Montreal, QC
Hybrid

About The Position

The Securitization/KME QA Lead is responsible for ensuring the quality, reliability, and accuracy of the technology platforms that support securitization operations. This role leads test strategy, automation initiatives, and quality assurance practices, partnering closely with development, business, and operations teams to deliver robust solutions that meet regulatory and business requirements.

The role also involves designing and executing comprehensive test strategies for AI systems and models, including prompt engineering, output evaluation, and bias/safety testing. This includes developing a deep understanding of LLM behavior, constructing effective prompts, recognizing hallucinations and off-target outputs, and applying evaluation metrics specific to generative AI. The QA Lead will also test AI systems integrated with RAG pipelines and knowledge bases, validate data quality and retrieval accuracy, understand vector database mechanics, and leverage frameworks such as LangChain and LangGraph. Finally, the position requires validating integration points using MCPs, including testing for tool availability and error handling.

Requirements

  • 7+ years of experience in quality assurance or quality engineering, with at least 3 years in a lead or senior capacity
  • Strong domain knowledge in securitization or capital markets (or similar asset classes)
  • Hands-on experience with test automation tools such as Selenium, Robot Framework, Playwright, or similar frameworks
  • Proficiency in programming languages such as Java or Python, and well versed in framework implementation
  • Hands-on experience in API automation and backend system validation to ensure quality across different systems
  • Proficiency with databases for data validation, query development, and reconciliation testing
  • Experience with CI/CD pipelines and DevOps practices (Jenkins, GitHub, or similar)
  • Demonstrated passion for simplifying and automating work, continuous learning, solving open-ended problems, and improving efficiency
  • Excellent analytical, problem-solving, and communication skills
  • Ability to manage multiple priorities in a fast-paced, deadline-driven environment

Nice To Haves

  • Experience leveraging AI and ML tools to enhance test coverage, improve efficiency, and reduce regression cycles

Responsibilities

  • Define and execute comprehensive test strategies for securitization platforms, ensuring coverage across functional, regression, integration, and performance testing.
  • Design, build, and maintain automated test suites to accelerate release cycles and improve test coverage.
  • Validate end-to-end deal workflows including setup, structuring, processing, and distributions.
  • Ensure data integrity across upstream and downstream systems by developing and executing reconciliation tests and reports.
  • Coordinate regression testing for platform releases, patches, and infrastructure changes to ensure stability and backward compatibility.
  • Partner with developers, business analysts, and product owners to clarify requirements, identify edge cases, and ensure testability of new features.
  • Lead defect triage sessions, prioritize issues based on business impact, and track resolution through to closure while maintaining clear documentation.
  • Define and monitor key quality indicators such as defect density, test coverage, and automation rates; present findings to leadership and recommend improvements.
  • Guide and mentor junior QA team members, establish testing standards and best practices, and foster a culture of quality across the team.
  • Provide production support during critical processing windows, investigate production incidents, and coordinate root cause analysis and remediation efforts.
  • Design and execute comprehensive test strategies for AI systems and models, including prompt engineering, output evaluation, and bias/safety testing.
  • Develop deep understanding of LLM behavior—tokenization, embeddings, attention mechanisms, and inference—to anticipate failure modes.
  • Construct effective prompts, recognize hallucinations and off-target outputs, and assess quality across accuracy, tone, coherence, and bias dimensions.
  • Apply evaluation metrics specific to generative AI and establish appropriate thresholds.
  • Test AI systems integrated with RAG pipelines and knowledge bases, validating data quality and retrieval accuracy as they impact model outputs.
  • Understand vector database mechanics, similarity search thresholds, embedding drift, and test edge cases including near-duplicate documents, sparse vs. dense embeddings, and performance under scale.
  • Leverage LangChain and LangGraph frameworks to read code, understand chain and graph construction, identify failure points, and write test harnesses.
  • Validate integration points using MCPs, testing tool availability and error handling.
© 2026 Teal Labs, Inc