About The Position

This position is within Acadia. Acadia is wholly owned by LSEG (London Stock Exchange Group) and is part of its Post Trade Solutions division. We’re looking for a full-stack Lead QA Engineer to design and scale modern automation platforms that support high-quality, high-velocity software delivery. This role combines automation engineering, performance engineering, CI/CD pipeline design, and cloud-aware testing practices, offering the opportunity to influence how enterprise systems are built, tested, and released. You will collaborate with engineering, DevOps, and platform teams to integrate testing deeply into development workflows, improve observability, scalability, and reliability, and ensure quality at scale across distributed systems.

Requirements

  • Strong analytical and problem‑solving abilities with a deep understanding of distributed systems and SaaS architectures.
  • Ability to break down complex systems and define effective test strategies across UI, API, data, and backend layers.
  • Proficiency in debugging, log analysis, and employing observability tools to trace issues across distributed environments.
  • Excellent interpersonal skills with the ability to articulate quality risks, metrics, and trade‑offs to both technical and non‑technical audiences.
  • Leadership capability to mentor QA engineers, influence quality culture, and champion modern testing practices across teams.
  • Strong collaboration skills with experience working closely with engineering, DevOps, SRE, product management, and platform teams in agile, fast‑paced environments.
  • Familiarity with continuous testing, Shift‑Left/Shift‑Right quality principles, and modern DevTestOps approaches.
  • Ability to drive consistency, alignment, and knowledge‑sharing across globally distributed engineering teams.
  • 10+ years of hands‑on experience in functional UI testing, API testing (REST/GraphQL), and performance testing for cloud‑based or SaaS applications.
  • Proven experience designing and implementing automated test frameworks for cloud‑native, containerized, and microservices‑based architectures (Docker, Kubernetes, AWS services).
  • Strong background in building scalable automated test suites using tools such as Playwright, Cypress, Postman, and K6.
  • Experience with contract testing methodologies and frameworks (e.g., Pact, OpenAPI driven testing).
  • Familiarity with foundational application security concepts and OWASP Top 10 risks.
  • Demonstrated expertise in CI/CD pipeline engineering and integrating automated testing into DevOps workflows.
  • Proficiency with Git‑based version control platforms (GitLab, GitHub, Bitbucket).
  • Solid programming skills in Java, JavaScript, and Python.
  • Experience using observability and monitoring platforms such as ELK, Splunk, and Datadog for debugging, issue triage, and quality insights.
  • Hands‑on experience with performance engineering and chaos testing concepts.
  • Knowledge of accessibility standards and experience validating accessible web experiences.
  • Exposure to, or active experimentation with, AI‑assisted testing tools and techniques.
  • Cloud experience preferred, especially within AWS ecosystems.
  • Degree in Computer Science, Engineering, Mathematics, or equivalent practical experience.

Responsibilities

  • Lead the end‑to‑end delivery of quality engineering solutions that align with technology guardrails and support strategic product and platform roadmaps.
  • Design and implement testing strategies (functional, API, performance, and reliability) across distributed SaaS platforms to ensure scalable, secure, and high‑quality releases.
  • Validate multi‑tenant SaaS architectures, including tenancy isolation, environment consistency, configuration drift detection, and horizontal scalability.
  • Develop and maintain a robust test data and test environment management strategy that enables automated, repeatable, and production‑like test execution.
  • Collaborate with Site Reliability Engineering (SRE) teams to embed observability metrics, logs, and traces into automated tests and quality gates, improving incident readiness and MTTR.
  • Incorporate production telemetry and real‑world usage insights into shift‑right testing practices to enhance resilience and defect detection.
  • Lead experimentation with GenAI/LLM‑based capabilities, including self‑healing tests, AI‑generated test cases, and automated defect triage, and transition validated solutions into the testing ecosystem.
  • Drive adoption of modern quality practices (Shift‑Left, DevTestOps, contract testing, performance engineering, and data validation) across product and engineering teams.
  • Translate business and technical requirements into scalable quality strategies and provide clear quality‑related guidance to cross‑functional partners.
  • Partner with global engineering and platform teams to ensure consistent application of testing standards and tools across agile delivery pipelines.
  • Participate in incident management, root‑cause analysis, and post‑mortem reviews, recommending durable preventative measures aligned to product and operational goals.
  • Leverage design‑thinking principles, DevTestOps pipelines, and infrastructure‑as‑code to improve delivery speed, test automation coverage, and overall system quality.
  • Stay ahead of advancements in GenAI, cloud infrastructure, microservices, automation tooling, and observability to continually evolve testing practices.
  • Produce and maintain comprehensive documentation covering test architectures, standards, strategies, and operational procedures for knowledge sharing and continuity.

Benefits

  • Annual Wellness Allowance
  • Paid Time Off
  • Medical
  • Dental
  • Vision
  • Flex Spending & Health Savings Options
  • Prescription Drug Plan
  • 401(K) Savings Plan and Company Match
  • Basic Life Insurance
  • Disability Benefits
  • Emergency Backup Dependent Care
  • Adoption Assistance
  • Commuter Assistance