Principal AI Security Engineer

Truist Bank, Charlotte, NC
Onsite

About The Position

The Principal AI Security Engineer is a senior production security engineer responsible for securing AI‑enabled systems across their full lifecycle, from design and development through deployment and production operation. This role focuses on AI‑specific threat models, including prompt injection, unsafe outputs, tool‑use abuse, data leakage, identity misuse, workflow escalation risk, and emergent agent behavior. The engineer designs and enforces guardrails that allow AI capabilities to operate safely within enterprise boundaries without slowing delivery. Daily work includes embedding security controls into agentic workflows, validating AI deployments against Forge security standards, supporting adversarial testing and red‑team exercises, building monitoring and detection content, and partnering with engineering teams to remediate risks before and after release. This is a hands‑on senior role requiring deep judgment, strong engineering execution, and the ability to balance safety, reliability, and delivery speed. For this opportunity, Truist will not sponsor an applicant for work visa status or employment authorization, nor will we offer any immigration-related support for this position (including, but not limited to, H-1B, F-1 OPT, F-1 STEM OPT, F-1 CPT, J-1, TN-1 or TN-2, E-3, O-1, or future sponsorship for U.S. lawful permanent residence status).

Requirements

  • Bachelor’s degree and 10 years of experience in systems engineering or an equivalent combination of education and work experience
  • Strong functional and technical knowledge of information/cyber security capabilities with deep expertise in one or more of the following areas: Encryption, Data Security, Application Security, Endpoint Security, Identity and Access Management, Windows/Unix/Linux Systems Security, Mainframe Security, Perimeter Security, Network Security, Mobility Security, Cloud Security, Cyber Security, Cryptography, or Authentication Systems
  • Strong understanding of service lifecycle management, strategic planning, and the cyber security landscape
  • 5+ years of experience in cybersecurity engineering, application security, or platform security roles.
  • Demonstrated experience securing production systems in enterprise environments.
  • Strong understanding of AI/LLM security risks including prompt injection, unsafe output handling, tool-use abuse, and data leakage.
  • Experience implementing security controls for APIs, workflows, automation systems, or distributed services.
  • Experience with logging, monitoring, alerting, and detection engineering.
  • Ability to partner effectively with engineering teams to design practical, implementable security controls.
  • Strong written and verbal communication skills, especially for security findings and remediation guidance.

Nice To Haves

  • Experience securing agentic systems, multi-step AI workflows, or tool-calling architectures.
  • Experience with Microsoft Azure, Copilot/Copilot Studio, or enterprise AI platforms.
  • Experience with adversarial testing, red-team support, or misuse-case modeling for AI systems.
  • Experience in financial services or other highly regulated enterprise environments.
  • Familiarity with identity, access management, secrets handling, and runtime policy enforcement for AI workloads.
  • Experience collaborating with AI QA or evaluation teams on release readiness.

Responsibilities

AI Security Engineering

  • Design, implement, and operate security controls for AI-enabled applications, agents, prompts, tools, and workflows.
  • Define and enforce guardrails that mitigate prompt injection, unsafe responses, unauthorized tool execution, and data exposure.
  • Review AI architectures, workflows, and integrations to identify and reduce security risk before deployment.

Runtime Controls & Monitoring

  • Build and maintain monitoring, logging, and alerting for AI systems, including prompt behavior, tool invocation, output patterns, and workflow execution.
  • Implement detection content for suspicious or policy-violating AI behavior.
  • Support incident response and investigation for AI-related security events.

Governance & Release Readiness

  • Partner with engineering, QA, and platform teams to ensure AI solutions meet Forge deployment and security gate requirements.
  • Validate AI deployments for auditability, traceability, and evidence completeness.
  • Support model, prompt, and workflow change validation from a security risk perspective.

Adversarial Testing & Risk Reduction

  • Support adversarial testing, misuse-case validation, and red-team preparation for AI systems.
  • Translate adversarial findings into actionable engineering fixes and control improvements.
  • Continuously evolve threat models as AI capabilities and usage patterns change.

Enablement & Standards

  • Provide security guidance to AI, agentic, and application engineers during design and delivery.
  • Contribute reusable security patterns, reference controls, and documentation to the Forge.
  • Mentor junior AI security engineers and elevate overall security maturity.

Benefits

  • All regular teammates (not temporary or contingent workers) working 20 hours or more per week are eligible for benefits, though eligibility for specific benefits may be determined by the division of Truist offering the position.
  • Truist offers medical, dental, vision, life insurance, disability, accidental death and dismemberment, tax-preferred savings accounts, and a 401k plan to teammates.
  • Teammates also receive no less than 10 days of vacation (prorated based on date of hire and by full-time or part-time status) during their first year of employment, along with 10 sick days (also prorated), and paid holidays.
  • Depending on the position and division, this job may also be eligible for Truist’s defined benefit pension plan, restricted stock units, and/or a deferred compensation plan.