Chief Architect – AI Threat Detection and Response

Mimecast
1d$228,000 - $342,000Remote

About The Position

The threat landscape is undergoing a fundamental shift. Adversaries are weaponizing AI — using large language models to craft hyper-personalized phishing at scale, injecting malicious instructions into agentic workflows, and deploying deepfake personas to bypass human judgment. Mimecast needs an architect who sees this clearly and knows how to build detection systems that stay ahead of it. The Chief Architect for AI Threat Detection & Response is a senior individual contributor role within the Office of the CTO, with the potential to expand into a managerial role leading a small incubation team of developers as the function matures. You will define the technical blueprint for how Mimecast detects and responds to next-generation threats — combining LLM-based detection, behavioral anomaly models, and AI-specific attack surface coverage — across email, collaboration, and human risk signals at enterprise scale. The Threat Surface You’ll Own This role is explicitly scoped to emerging and AI-driven threats, not just traditional email security. You will architect detection for: AI-generated phishing and BEC — LLM-crafted lures that defeat signature and heuristic-based detection, including persona impersonation and synthetic voice/video in hybrid attacks. Prompt injection attacks — adversarial instructions embedded in emails, documents, or web content designed to hijack Mimecast’s own AI pipelines or customer-deployed LLM agents. Agentic workflow abuse — manipulation of AI agents operating on behalf of users (auto-reply, scheduling, data retrieval) to exfiltrate data or pivot laterally without human interaction. AI-assisted reconnaissance and evasion — attackers using models to profile targets, time campaigns, and dynamically mutate payloads to avoid detection. Deepfake and synthetic identity threats — AI-generated audio, video, or identity signals used in spear-phishing, vishing, and wire fraud scenarios. Model poisoning and adversarial ML — attacks targeting Mimecast’s own detection models through crafted inputs designed to degrade accuracy or induce false negatives.

Requirements

  • 15+ years in security architecture or applied ML, with at least 5 years building production AI/ML detection systems — not just models in research, but systems operating at scale under adversarial conditions.
  • Demonstrated expertise in LLM application design: prompt engineering, RAG architectures, fine-tuning, and the failure modes specific to LLMs deployed in security-critical pipelines.
  • Deep understanding of the AI threat landscape — prompt injection, adversarial ML, model evasion, synthetic content attacks — and the detection approaches that work against each.
  • Hands-on experience with anomaly detection at scale: unsupervised methods (autoencoders, isolation forests, graph anomaly detection), behavioral baselining, and multi-variate signal correlation.
  • Strong command of cloud-native ML infrastructure on AWS — SageMaker, Kinesis, Bedrock, or equivalent — with real architectural opinions on latency, throughput, and cost at email volume.
  • Proven track record of external influence: CVEs, published research, conference presentations, or recognized contributions to AI security standards and threat intelligence.
  • Ability to operate as a senior IC in a PE-backed environment — decisive under ambiguity, outcome-focused, and capable of holding an architectural position with rigor and humility.

Responsibilities

  • LLM-Based Detection Architecture Design and own the architecture for LLM-powered detection pipelines — including prompt design, context assembly, model selection (hosted vs. fine-tuned), and inference cost/latency trade-offs at email scale.
  • Define where LLMs augment vs. replace classical ML models in the detection stack: semantic intent analysis, writing style anomaly, social engineering classification, and zero-day lure identification.
  • Build the adversarial robustness framework for Mimecast’s LLM-based detectors — red-teaming pipelines, prompt injection hardening, and evasion-resistance testing.
  • Establish evaluation methodology for LLM detectors: beyond accuracy metrics to include hallucination rate, decision consistency, and explainability for analyst review.
  • Anomaly Detection & Behavioral Modeling Own the architecture for behavioral baseline modeling across users, communication graphs, and sending infrastructure — enabling detection of deviations that precede BEC, account takeover, and insider threats.
  • Design unsupervised and semi-supervised anomaly detection systems that operate on high-cardinality, sparse behavioral signals without requiring labeled attack data.
  • Architect multi-signal correlation across email, identity, endpoint, and SaaS telemetry to surface low-and-slow attacks invisible to single-channel detectors.
  • Define feedback mechanisms between analyst verdicts and anomaly model recalibration — ensuring drift is detected and baselines evolve with customer communication patterns.
  • AI-Specific Attack Surface Coverage Define Mimecast’s technical posture on prompt injection detection — both as a threat to customer AI deployments and as a risk vector within Mimecast’s own agentic features.
  • Architect detection for agentic workflow abuse scenarios: anomalous agent actions, out-of-policy tool calls, and AI-to-AI communication patterns that indicate compromise.
  • Build the threat model and detection coverage map for synthetic content (deepfakes, AI-generated documents, cloned sender identities) as these become primary attack delivery mechanisms.
  • Engage with the security research community and contribute to emerging standards for AI threat taxonomy, attack surface enumeration, and detection benchmarks.
  • Platform, Standards & Leadership Partner with Platform Services to ensure detection infrastructure — model serving, feature stores, real-time inference — is a first-class component of the Arc platform data fabric.
  • Define engineering standards for model evaluation, adversarial testing, drift monitoring, and incident response when detection models degrade or are actively attacked.
  • Mentor senior engineers and ML practitioners; set the technical bar for the detection organization and act as the internal authority on AI-native threat research.
  • Represent Mimecast externally — at RSAC, Black Hat, and industry forums — as a recognized voice on AI threat detection and the evolving AI attack surface.
  • Threat Evangelization & Intelligence Publishing Own the ‘anatomy of a threat’ narrative for significant detections — translating raw detection data into structured threat breakdowns that explain attack mechanics, targeting patterns, evasion techniques, and impact across the Mimecast customer base.
  • Publish threat intelligence in formats designed for multiple audiences: technical deep-dives for the security research community, threat briefings for enterprise customers and prospects, and sales-ready threat narratives that demonstrate Mimecast’s detection advantage in active campaigns.
  • Work directly with GTM, PMM, and sales engineering to package threat data as evidence of detection efficacy — turning real-world catches into competitive differentiation in RFPs, customer briefings, and analyst interactions.
  • Build and maintain a cadence of threat reporting — monthly threat digests, campaign-specific advisories, and annual threat landscape reports — that establishes Mimecast as a primary source of AI threat intelligence.
  • Closed-Loop Detection Pipeline Architect the end-to-end feedback loop from missed detections and customer-reported false negatives back into the detection engines — ensuring every evasion event is a structured input to model improvement, not a one-off incident.
  • Design the threat intelligence ingestion pipeline: how external threat feeds, analyst verdicts, customer submissions, and Mimecast’s own detection corpus flow into feature engineering, model retraining, and rule updates in a governed, auditable way.
  • Define the data model and tooling for missed detection triage — classification of why a threat was missed (signature gap, model blind spot, novel evasion), routing to the correct remediation path, and tracking time-to-coverage for each gap type.
  • Build the operational cadence around the loop: detection gap reviews, retraining triggers, coverage regression testing, and SLA targets for closing gaps on newly identified attack patterns.
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service