Lead Security Engineer - AI/ML

JPMorgan Chase & Co.•Plano, TX
4h

About The Position

Take on a crucial role where you'll be a key part of a high-performing team delivering secure software solutions. Make a real impact as you help shape the future of software security at one of the world's largest and most influential companies. As a Lead Security Engineer at JPMorgan Chase within the Cybersecurity & Technology Controls for AI/ML, you are an integral part of team that works to deliver software solutions that satisfy pre-defined functional and user requirements with the added dimension of preventing misuse, circumvention, and malicious behavior. As a core technical contributor, you are responsible for carrying out critical technology solutions with tamper-proof, audit defensible methods across multiple technical areas within various business functions.

Requirements

  • Formal training or certification in Public Cloud environment concepts and advanced hands-on experience with cloud-native AI services (e.g., Bedrock).
  • Experience with threat modeling, discovery, vulnerability, and penetration testing (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) and foundational cybersecurity concepts such as IAM, Authentication, OIDC, SAML.
  • Practical experience with Infrastructure as Code (IaC) solutions like Terraform and CloudFormation.
  • Proficiency in Python scripting.
  • Strong understanding of AI/ML concepts and trends, with knowledge of AI red teaming foundational concepts to design and implement exercises for complex AI architectures.
  • Ability to conceptualize, design, validate, and communicate creative technical solutions to enterprise-level security problems, including building internal tools, dashboards, and automation for red teaming activities.

Nice To Haves

  • Expertise in planning, designing, and implementing AI red teaming exercises and enterprise-level security solutions for generative AI, LLMs, and ML systems.
  • Experience with specialized AI security/red teaming tools and frameworks (e.g., PyRIT, Garak, custom LLM evaluation harnesses) and contributions to AI security or open-source security projects.

Responsibilities

  • Develop and enhance security strategies, red teaming programs, and solution designs, while troubleshooting technical issues and creating scalable solutions.
  • Design secure, high-quality AI and software architectures, reviewing and challenging designs and code to ensure adversarial resilience.
  • Reduce AI and LLM security vulnerabilities by adhering to industry standards and emerging AI safety research, evolving policies, testing protocols, and controls.
  • Collaborate with stakeholders across product, data science, cyber, legal, and risk to understand AI use cases and recommend modifications during periods of heightened vulnerability or regulatory change.
  • Conduct discovery, threat modeling, and adversarial testing on generative AI, RAG pipelines, and ML systems to identify vulnerabilities such as prompt injection, jailbreaking, and data poisoning.
  • Provide guidance on secure design, logging, monitoring, and compensating controls for AI applications and platforms.
  • Define and implement AI red teaming methodologies, playbooks, and success metrics, establishing mechanisms for continuous testing and safe rollout of new AI models and features.
  • Work with platform and cloud security teams to ensure secure infrastructure configuration and alignment with enterprise security architecture.
  • Engage with external researchers, vendors, and standards bodies to track emerging AI threats and bring best practices into the organization.
  • Foster a team culture of diversity, equity, inclusion, and respect.
  • Collaborate within a cross-functional team to develop relationships, influence senior stakeholders, and drive alignment on AI risk tolerance and mitigation priorities.
© 2024 Teal Labs, Inc
Privacy PolicyTerms of Service