AI Cybersecurity Engineer

GRAILMenlo Park, CA
Hybrid

About The Position

GRAIL is a pioneering healthcare company focused on early cancer detection using advanced technologies like next-generation sequencing, population-scale clinical studies, and cutting-edge computer and data science. We are seeking a collaborative and forward-thinking AI Cybersecurity Engineer to join our team. This role is crucial for designing and implementing our Cybersecurity Program, ensuring the secure and responsible use of AI technologies including large language models (LLMs), ML pipelines, and AI-enabled applications. You will work across various teams, including Data Security, Engineering, Architecture, Legal/Compliance, and business stakeholders, to embed security by design into our AI adoption strategies. This leadership position requires a blend of deep technical expertise in defensive operations, the ability to communicate risk effectively to senior leadership, and a commitment to GRAIL's core values and leadership attributes. The role is based in Menlo Park, California, with a move to Sunnyvale, California in Fall 2026, and offers a flexible work arrangement with a minimum of 60% on-site presence required.

Requirements

  • Strong hands-on experience with AI/ML technologies, LLMs, or AI development tools
  • 3–5+ years of experience in security engineering, application security, or cloud security
  • Experience performing threat modeling, security architecture design, and secure code review or testing
  • Experience developing AI solutions within IDEs, utilizing AI code assistants
  • Experience working with LLM APIs (OpenAI, Anthropic, etc.)
  • Familiarity with AI frameworks such as LangChain, LlamaIndex, or similar
  • Understanding of AI/ML lifecycle and prompt engineering
  • Familiarity with AI security risks such as prompt injection, data leakage, and model misuse
  • Experience working in cloud environments (AWS, Azure, or GCP)
  • Familiarity with secure development practices (DevSecOps)
  • Working knowledge of OWASP Top 10 and application security principles
  • Strong collaboration and communication skills

Nice To Haves

  • Experience with agentic and Model Context Protocol (MCP) architectures.
  • Expertise in Python, R, Java, or similar programming languages.
  • Experience in GCP or AWS cloud-native services, architectures, and tools.
  • Advanced knowledge of security and governance frameworks (NIST AI-RMF, ISO 42001, OWASP Top 10 for LLM).

Responsibilities

  • Build and maintain a secure reasoning layer for GRAIL data strategy, moving security from a concept to a functional necessity within business workflows.
  • Develop and refine healthcare-specific security detection models (e.g., Content Safety Classifiers, Behavioral / Alignment Monitoring Models) that outperform generic models by minimizing domain-specific blind spots.
  • Implement and manage cryptographic Private Information Retrieval (PIR) systems (such as SealPIR, XPIR, or CPIR) to protect access patterns over large-scale patient record datasets. Detects and prevents exposure of sensitive data (PII, secrets, enterprise data).
  • Design data-layer protections, including bilinear pairing checks and cryptographic receipts, to ensure any server-side tampering is detected instantly.
  • Deploy and maintain Terraform IaC across AWS multi-cloud environments, ensuring VPC isolation and continuous threat exposure monitoring.
  • Utilize XAI tools like LIME and SHAP to analyze model failure modes, ensuring that security controls do not inadvertently cause HIPAA availability violations or disrupt care coordination.
  • Design, build, and support AI/ML solutions and integrations across the enterprise.
  • Evaluate and secure AI platforms, LLMs, Claude, Gemini, ChatGPT and AI-powered development tools (e.g., GitHub, OKTA, PaloAlto) in AWS Bedrock.
  • Lead development of AI security controls, guardrails, and governance frameworks.
  • Perform threat modeling and risk assessments for AI/ML systems and integrations.
  • Partner with engineering teams to enable secure AI development practices, including prompt engineering, API security, and data protection.
  • Assess and mitigate risks related to LLMs, including prompt injection, model leakage, and data exposure.
  • Contribute to secure architecture patterns for AI-enabled applications and services.
  • Support security reviews, testing, and validation of AI use cases and implementations.
  • Collaborate with cloud, data, and application teams to ensure secure deployment of AI capabilities.
  • Evaluate and onboard AI vendors and tools, ensuring alignment with security, privacy, and compliance requirements.
  • Promote awareness and adoption of secure AI usage practices across the organization.
  • Remain current on emerging AI and security risks, trends, and technologies.
  • Ensure alignment and compliance with industry standards (NIST AI-RMF, ISO 42001, OWASP Top 10 for LLMs) and advanced security architectures (Agentic, MCP).

Benefits

  • flexible time-off or vacation
  • a 401(k) retirement plan with employer match
  • medical, dental, and vision coverage
  • carefully selected mindfulness programs

Stand Out From the Crowd

Upload your resume and get instant feedback on how well it matches this job.

Upload and Match Resume

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

No Education Listed

Number of Employees

501-1,000 employees

© 2026 Teal Labs, Inc
Privacy PolicyTerms of Service