AI Cybersecurity Engineer

GRAIL, Menlo Park, CA
Hybrid

About The Position

GRAIL is a healthcare company pioneering new technologies to advance early cancer detection. We are looking for a collaborative and forward-thinking AI Cybersecurity Engineer to help lead the design and implementation of our Cybersecurity Program. In this role, you will work closely with teams across the company to ensure our use of AI (large language models, ML pipelines, commercial AI platforms, and AI-enabled applications) is secure, responsible, and aligned with our organizational values, and you will contribute broadly to cloud, application, and platform security initiatives.

You'll partner with Data Security, Engineering, Architecture, Legal/Compliance, and business stakeholders to ensure our AI adoption is resilient and secure by design. This is an opportunity to define foundational controls for a rapidly evolving domain. You will also be responsible for detecting, analyzing, and neutralizing sophisticated cyber threats while proactively gathering intelligence to anticipate future attacks.

This is a leadership role requiring more than technical proficiency: it balances deep expertise in defensive operations with the ability to communicate risk to senior leadership and stakeholders. We are looking for a leader who brings curiosity and a strong security engineering foundation, models GRAIL's core values, embodies our LEAD leadership attributes, and delivers results with integrity, inclusivity, and strategic insight.

Requirements

  • Strong hands-on experience with AI/ML technologies, LLMs, or AI development tools
  • 3–5+ years of experience in security engineering, application security, or cloud security
  • Experience performing threat modeling, security architecture design, and secure code review or testing
  • Experience developing AI solutions within IDEs, utilizing AI code assistants
  • Experience working with LLM APIs (OpenAI, Anthropic, etc.)
  • Familiarity with AI frameworks such as LangChain, LlamaIndex, or similar
  • Understanding of AI/ML lifecycle and prompt engineering
  • Familiarity with AI security risks such as prompt injection, data leakage, and model misuse
  • Experience working in cloud environments (AWS, Azure, or GCP)
  • Familiarity with secure development practices (DevSecOps)
  • Working knowledge of OWASP Top 10 and application security principles
  • Strong collaboration and communication skills

Nice To Haves

  • Experience with agentic and Model Context Protocol (MCP) architectures
  • Expertise in Python, R, Java, or similar programming languages
  • Experience in GCP or AWS cloud-native services, architectures, and tools
  • Advanced knowledge of security and governance frameworks (NIST AI RMF, ISO 42001, OWASP Top 10 for LLMs)

Responsibilities

  • Build and maintain a secure reasoning layer for GRAIL's data strategy, making security a functional necessity within business workflows rather than an abstract concept.
  • Develop and refine healthcare-specific security detection models (e.g., content safety classifiers, behavioral/alignment monitoring models) that address domain-specific blind spots.
  • Implement and manage cryptographic Private Information Retrieval (PIR) systems (such as SealPIR, XPIR, or CPIR) to protect access patterns over large-scale patient record datasets.
  • Detect and prevent exposure of sensitive data (PII, secrets, enterprise data).
  • Design data-layer protections, including bilinear pairing checks and cryptographic receipts, to ensure any server-side tampering is detected instantly.
  • Deploy and maintain Terraform IaC across AWS environments, ensuring VPC isolation and continuous threat exposure monitoring.
  • Utilize XAI tools like LIME and SHAP to analyze model failure modes, ensuring that security controls do not inadvertently cause HIPAA availability violations or disrupt care coordination.
  • Design, build, and support AI/ML solutions and integrations across the enterprise.
  • Evaluate and secure AI platforms and LLMs (e.g., Claude, Gemini, ChatGPT, AWS Bedrock) as well as AI-powered development and enterprise tools (e.g., GitHub, Okta, Palo Alto).
  • Lead development of AI security controls, guardrails, and governance frameworks.
  • Perform threat modeling and risk assessments for AI/ML systems and integrations.
  • Partner with engineering teams to enable secure AI development practices, including prompt engineering, API security, and data protection.
  • Assess and mitigate risks related to LLMs, including prompt injection, model leakage, and data exposure.
  • Contribute to secure architecture patterns for AI-enabled applications and services.
  • Support security reviews, testing, and validation of AI use cases and implementations.
  • Collaborate with cloud, data, and application teams to ensure secure deployment of AI capabilities.
  • Evaluate and onboard AI vendors and tools, ensuring alignment with security, privacy, and compliance requirements.
  • Promote awareness and adoption of secure AI usage practices across the organization.
  • Remain current on emerging AI and security risks, trends, and technologies.
  • Ensure alignment and compliance with industry standards (NIST AI RMF, ISO 42001, OWASP Top 10 for LLMs) and emerging architectures (agentic systems, MCP).

Benefits

  • Flexible time off or vacation
  • A 401(k) retirement plan with employer match
  • Medical, dental, and vision coverage
  • Carefully selected mindfulness programs

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: No education listed
  • Number of Employees: 501–1,000
