Senior Engineer, AI Evaluation & Reliability (Agentic AI)

Anomali
Redwood City, CA
Posted 43 days ago · $140,000 - $190,000 · Hybrid

About The Position

We're looking for a Senior Engineer, AI Evaluation & Reliability to lead the design and execution of evaluation, quality assurance, and release gating for our agentic AI features. You'll develop the pipelines, datasets, and dashboards that measure and improve agent performance across real-world SOC workflows -- ensuring every release is safe, reliable, efficient, and production-ready. This role partners closely with the Product team to deliver operational excellence and trust in every AI-driven capability. You will ensure that our agentic AI features operate at full production scale, ingesting and acting on millions of SOC alerts per day, with measurable impact on analyst productivity and risk mitigation.

Requirements

  • 5+ years building evaluation or testing infrastructure for ML/LLM systems or large-scale distributed systems.
  • Proven ability to translate product requirements into measurable metrics and test plans.
  • Strong Python skills (or similar language) and experience with modern data tooling.
  • Hands-on experience running A/B tests, canaries, or experiment frameworks.
  • Experience defining and maintaining operational reliability metrics (SLIs/SLOs) for AI-driven systems.
  • Familiarity with large-scale distributed or streaming systems serving AI/agent workflows (millions of events or alerts/day).
  • Excellent communication skills -- able to clearly convey technical results and trade-offs to engineers, PMs, and analysts.
  • This position is not eligible for employment visa sponsorship. The successful candidate must not now, or in the future, require visa sponsorship to work in the US.

Nice To Haves

  • Experience evaluating or deploying agentic or tool-using AI systems (multi-agent orchestration, retrieval-augmented reasoning, prompt lifecycle management).
  • Familiarity with LLM evaluation frameworks (e.g., model-graded evals, pairwise/rubric scoring, preference learning).
  • Exposure to AI safety testing, including prompt injection, data exfiltration, abuse taxonomies, and resilience validation.
  • Understanding of explainability and compliance requirements for autonomous workflows, ensuring traceability and auditability of AI behavior.
  • Background in security operations, incident response, or enterprise automation; comfortable interpreting logs, alerts, and playbooks.
  • Startup experience delivering high-impact systems in fast-paced, evolving environments.

Responsibilities

  • Define quality metrics: Translate SOC use cases into measurable KPIs (e.g., precision/recall, MTTR, false-positive rate, step success, latency/cost budgets).
  • Build continuous evaluations: Develop offline/online evaluation pipelines, regression suites, and A/B or canary tests; integrate them into CI/CD for release gating.
  • Curate and manage datasets: Maintain gold-standard datasets and red-team scenarios; establish data governance and drift monitoring practices.
  • Ensure safety, reliability, and explainability: Partner with Platform and Security Research to encode guardrails, policy enforcement, and runtime safety checks.
  • Expand adversarial test coverage (prompt injection, data exfiltration, abuse scenarios).
  • Ensure explainability and auditability of agent decisions, maintaining traceability and compliance of AI-driven workflows.
  • Production reliability & observability: Monitor and maintain reliability of agentic AI features post-release -- define and uphold SLIs/SLOs, establish alerting and rollback strategies, and conduct incident post-mortems.
  • Design and implement infrastructure to scale evaluation and production pipelines for real-time SOC workflows across cloud environments.
  • Drive agentic system engineering: Experiment with multi-agent systems, tool-using language models, retrieval-augmented workflows, and prompt orchestration.
  • Manage model and prompt lifecycle -- track versions, rollout strategies, and fallbacks; measure impact through statistically sound experiments.
  • Collaborate cross-functionally: Work with Product, UX, and Engineering to prioritize high-leverage improvements, resolve regressions quickly, and advance overall system reliability.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Professional, Scientific, and Technical Services
  • Education Level: No Education Listed
  • Number of Employees: 251-500 employees
