AI Quality Assurance Analyst

Allied Benefit SystemsChicago, IL
32d$110,000 - $125,000Remote

About The Position

The AI Quality Assurance Analyst is the quality champion for Allied’s emerging AI capabilities, ensuring that every model, integration, and automation works reliably and safely within our business processes. Working alongside developers, product owners, and business users, the Analyst will design tests that push the limits of our applications and help deliver trustworthy AI solutions.

Requirements

  • 7+ years in software Quality Assurance/testing roles, with significant experience in enterprise systems or complex integrations required.
  • Proven expertise in test automation (UI and API) and building regression test suites in CI/CD environments.
  • Familiarity with AI/ML system testing principles.
  • Comfortable validating outputs that may vary statistically and determining acceptable accuracy thresholds.
  • Strong analytical skills to identify edge cases and pinpoint defects across interconnected systems (frontend, backend, data, model).
  • Excellent documentation and communication skills for writing test plans, reporting issues, and guiding teams on quality best practices.
  • Test automation tools such as Selenium or Cypress for web interface testing, and Postman or REST-assured for API testing.
  • Testing frameworks like PyTest or JUnit, integrated with CI/CD pipelines (Azure DevOps, Jenkins) for continuous testing.
  • Version control (Git) for maintaining test scripts, and monitoring/logging tools (Azure Monitor, Splunk) to investigate issues.
  • Performance testing or security testing tools for evaluating AI systems under load or threat scenarios.

Nice To Haves

  • Experience in industries like insurance or healthcare is a plus (provides domain context for Allied’s processes).

Responsibilities

  • Test AI features integrated into core systems (e.g., Dynamics CRM, QicLink claims platform, Docuvantage) to ensure they function correctly in end-to-end workflows.
  • Write and automate test cases for AI outputs and traditional software features; define clear pass/fail criteria (using statistical sampling when appropriate) to validate AI results.
  • Own regression testing for AI capabilities: continuously re-run critical test scenarios as models are updated or data changes, to catch issues early.
  • Contribute to the overall test strategy and perform adversarial or edge-case testing (e.g., stress tests, prompt injections) to evaluate the robustness and security of AI components.
  • Document and recommend solutions for issues found; coordinate with the AI Product Owner and SMEs to clarify requirements for fixes and improvements.
  • Collaborate closely with developers and the Product Owner to report bugs, verify fixes, and improve requirements.
  • Emphasize shared accountability for quality across the team.

Benefits

  • Medical
  • Dental
  • Vision
  • Life & Disability Insurance
  • Generous Paid Time Off
  • Tuition Reimbursement
  • EAP
  • Technology Stipend
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service