Manual QA Engineer

Kira
New York, NY
$110,000 - $140,000

About The Position

We live in a world where technology is rapidly changing the educational experiences of students and teachers everywhere, and we have the opportunity to shape how this change takes place. Kira's mission is to harness transformative AI technologies to make world-class personalized teaching and learning accessible to everyone. Kira is a rapidly growing startup backed by top-tier venture capital funds including New Enterprise Associates (NEA), Andrew Ng's AI Fund, and Primavera.

Quality is critical to ensuring teachers and students can rely on our product every day. As a Manual QA Engineer, you will help ensure a high-quality, reliable experience across our platform. This role is focused on hands-on manual testing, exploratory testing, and validating new features before they reach production. You will work closely with product managers, engineers, and designers to deeply understand new features, identify potential issues early, and advocate for the end-user experience. You will play a key role in verifying functionality across complex workflows, including AI-powered features where outputs may be non-deterministic. This role is ideal for someone who is detail-oriented, curious about how products work, and passionate about delivering high-quality user experiences.

Requirements

  • 2–5+ years of experience in manual software testing or quality assurance
  • Strong experience performing manual regression, exploratory, and functional testing
  • Experience testing web applications and user-facing products
  • Ability to write clear and actionable bug reports
  • Strong debugging skills and ability to work closely with engineers to reproduce issues
  • High attention to detail and strong product intuition
  • Excellent communication skills and ability to collaborate across engineering, product, and design
  • Comfort working in a fast-moving startup environment
  • Strong curiosity and willingness to deeply understand product behavior and edge cases

Nice To Haves

  • Experience testing AI or LLM-powered products
  • Experience testing complex workflows involving APIs, backend systems, or data pipelines
  • Familiarity with basic developer tools (browser dev tools, logs, network inspection)
  • Experience testing distributed systems or modern web stacks

Responsibilities

  • Perform manual testing of new product features across web applications and AI-driven workflows
  • Execute regression, exploratory, and acceptance testing to validate new functionality before release
  • Run sanity checks on staging and release builds to ensure stability prior to deployment
  • Write clear, detailed, and reproducible bug reports, including steps to reproduce, logs, and impact assessments
  • Collaborate closely with engineers to debug issues and verify fixes
  • Develop a deep understanding of the product to identify edge cases, usability issues, and quality risks
  • Validate complex teacher and student workflows, including AI-generated outputs and interactive experiences
  • Evaluate features not only for correctness but also usability, clarity, and user experience
  • Help prioritize testing efforts based on product impact and risk
  • Contribute to QA documentation, testing checklists, and release validation processes
  • Participate in release readiness and post-release validation to ensure product stability

Benefits

  • Competitive compensation and equity. For applicants in the US, the salary range for this role is $110,000 - $140,000, depending on relevant experience and skill set
  • Medical, dental, vision, and life insurance, including a medical insurance plan with 100% coverage of employee premiums
  • Flexible PTO policy and company holidays
  • DoorDash credit for lunch daily
  • Dog friendly office 🐶