AI Quality Engineer

Momentive Software, Atlanta, GA
Remote

About The Position

Momentive Software is seeking an AI Quality Engineer to join their team. This role is crucial for ensuring the quality, reliability, and safety of AI systems, particularly Large Language Models (LLMs) and agentic AI systems. The AI Quality Engineer will be responsible for designing and implementing evaluation frameworks, building automated test pipelines, developing tools to detect regressions, and defining key quality metrics. This position requires a strong understanding of software testing principles, hands-on experience with AI systems, and excellent communication skills to collaborate with cross-functional teams and stakeholders. The goal is to build robust AI systems that catch regressions before they reach production, reduce detection time for quality issues, and provide clear quality signals for confident release decisions.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
  • 3–5 years of professional software engineering or quality engineering experience.
  • Hands-on experience working with LLMs or agentic AI systems (e.g., GPT-4, Claude, Gemini, or open-source models).
  • Proficiency in Python for scripting, test automation, and data analysis.
  • Experience designing and running evaluations (evals) for generative AI or LLM-powered features.
  • Solid understanding of software testing principles: unit, integration, regression, and end-to-end testing.
  • Familiarity with agentic frameworks and concepts (e.g., tool use, multi-step reasoning, retrieval-augmented generation, memory).
  • Experience with CI/CD pipelines and integrating automated tests into development workflows.
  • Strong analytical skills — able to interpret probabilistic outputs and distinguish meaningful regressions from expected variance.
  • Strong written and verbal communication skills; ability to clearly document findings and present quality data to non-technical stakeholders.
  • Detail-oriented, with a structured approach to exploring edge cases and failure scenarios.
  • Ability to work in a fast-paced environment and manage multiple priorities effectively.
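The analytical skill called out above — distinguishing meaningful regressions from expected variance — can be made concrete with a small, hedged sketch. Because LLM outputs are probabilistic, a drop in eval pass rate may be noise; a two-proportion z-test is one standard way to judge significance. All numbers and names below are hypothetical illustrations, not part of any Momentive system.

```python
# Illustrative sketch: is a drop in eval pass rate a real regression or
# run-to-run variance? Uses a one-sided two-proportion z-test (stdlib only).
from math import sqrt, erf

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """One-sided p-value that run B's pass rate is genuinely lower than run A's."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert the z-score to a one-sided p-value via the standard normal CDF.
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical data: baseline run passed 183/200 cases; candidate passed 168/200.
p = two_proportion_z(183, 200, 168, 200)
print(f"p-value: {p:.4f}")  # a small p suggests a real regression, not noise
```

A low p-value (here roughly 0.01) would justify blocking the release; a high one suggests the difference is within expected variance and more samples are needed before acting.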

Nice To Haves

  • Experience with prompt engineering and systematic prompt evaluation methodologies.
  • Familiarity with AI safety, alignment, or responsible AI concepts (e.g., hallucination mitigation, bias detection, guardrails).
  • Exposure to agentic orchestration frameworks (e.g., LangChain, LangGraph, AutoGen, CrewAI, or similar).
  • Experience with vector databases or RAG pipelines (e.g., Pinecone, Weaviate, pgvector).
  • Knowledge of observability and monitoring tools for AI systems (e.g., LangSmith, Weights & Biases, Arize).
  • Background in data science or ML experimentation practices.
  • Experience with version control systems (Git) and defect-tracking tools (e.g., Jira).
  • Exposure to cloud platforms (e.g., AWS, Azure, GCP) in the context of deploying or testing AI services.

Responsibilities

  • Design and implement evaluation frameworks (evals) to assess LLM and agentic AI system quality, including accuracy, consistency, safety, and task completion rates.
  • Build and maintain automated test pipelines for AI features, covering unit, integration, and end-to-end scenarios across agentic workflows.
  • Develop tooling to detect regressions in model behavior, prompt outputs, and agent decision-making across releases.
  • Define and track quality metrics for AI systems (e.g., hallucination rates, tool-use accuracy, latency, failure recovery) and surface findings clearly to stakeholders.
  • Collaborate with engineers and product managers to identify edge cases, adversarial inputs, and failure modes specific to multi-step agentic pipelines.
  • Contribute to prompt evaluation strategies, including red-teaming, adversarial testing, and bias/fairness assessments.
  • Participate in design and code reviews with a quality-focused lens, raising concerns about testability and reliability early.
  • Help define and document quality standards and best practices for AI/ML features across the team.
  • Other duties as assigned.
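To ground the first two responsibilities, here is a minimal sketch of what an automated eval pipeline can look like: a graded dataset of prompts, a simple pass/fail grader, and a pass-rate summary. Everything here (`EvalCase`, `stub_model`, the keyword grader) is a hypothetical stand-in for illustration — production evals typically use richer judges (LLM-as-judge, semantic similarity) and call the real model under test.

```python
# Minimal eval-harness sketch: run a model over graded cases, report pass rate.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # simple keyword grader; real evals use richer judges

CASES: List[EvalCase] = [
    EvalCase("What is 2 + 2?", "4"),
    EvalCase("Name the capital of France.", "Paris"),
]

def grade(output: str, case: EvalCase) -> bool:
    # Pass if the expected keyword appears in the output (case-insensitive).
    return case.must_contain.lower() in output.lower()

def run_eval(model_fn: Callable[[str], str], cases: List[EvalCase]) -> float:
    """Return the fraction of cases model_fn passes."""
    passed = sum(grade(model_fn(c.prompt), c) for c in cases)
    return passed / len(cases)

# Stub model for illustration; a real pipeline would call the LLM under test.
def stub_model(prompt: str) -> str:
    return {"What is 2 + 2?": "The answer is 4.",
            "Name the capital of France.": "Paris is the capital."}[prompt]

print(f"pass rate: {run_eval(stub_model, CASES):.0%}")  # pass rate: 100%
```

Wired into CI, a harness like this yields a per-release pass rate that regression tooling can compare against a baseline, which is exactly the quality signal the role is asked to provide.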

Benefits

  • Medical, Dental & Vision Benefits
  • 401(k) Savings Plan with Company Match
  • Flexible Planned Paid Time Off
  • Generous Sick Leave
  • Inclusive & Welcoming Environment
  • Purpose-Driven Culture
  • Work-Life Balance
  • Commitment to Community Involvement
  • Employer-Paid Parental Leave
  • Employer-Paid Short-Term Disability
  • Remote Work Flexibility