Senior AI Security Engineer

Truist Bank
Charlotte, NC
Onsite

About The Position

The Senior AI Security Engineer helps design, implement, test, and operate the controls that keep enterprise AI systems safe, governed, and production-ready. This role focuses on the security engineering foundations required for AI-enabled applications, agents, prompt-driven workflows, and tool-integrated automations operating in a regulated enterprise environment.

This is a hands-on engineering role within the Forge AI Security & Governance model. The engineer supports guardrail implementation, prompt-injection defense, output filtering, monitoring, secure tool-use boundaries, logging, detection content, and deployment-readiness controls for AI-enabled systems. The work spans design, testing, automation, detection engineering, and operational support across the AI delivery lifecycle.

Daily work includes implementing security controls for AI and agentic systems, validating configurations, supporting adversarial test preparation, building monitoring logic, partnering with engineering to harden prompt and tool behaviors, documenting controls, and ensuring AI solutions meet enterprise safety, traceability, and governance requirements before and after deployment.

For this opportunity, Truist will not sponsor an applicant for work visa status or employment authorization, nor will we offer any immigration-related support for this position (including, but not limited to, H-1B, F-1 OPT, F-1 STEM OPT, F-1 CPT, J-1, TN-1 or TN-2, E-3, O-1, or future sponsorship for U.S. lawful permanent residence status).

Essential Duties And Responsibilities

The following is a summary of the essential functions for this job. Other duties, both major and minor, may be performed that are not mentioned below. Specific activities may change from time to time.

Requirements

  • Bachelor’s degree or equivalent combination of advanced education and experience, which could include any combination of 8 years of experience in IT software engineering, 5 years of relevant business experience (i.e., making technical decisions on the business side), 5 years of experience in project management, and at least 2 years of management experience
  • Broad and in-depth knowledge of technology trends, competitive environment, regulatory requirements and trends, and IT strategies employed to continually meet the demands of clients and regulators
  • Ability to translate enterprise level strategic planning information into software and data management needs, create business plans, and turn them into effective business solutions
  • Executive-level communication skills, including strong negotiation, facilitation, and presentation skills, and experience negotiating with vendors for relevant products and services
  • Ability to lead projects of significant complexity and risk exposure, particularly with enterprise-wide implications
  • Ability to exercise judgment in solving technical, operational, and organizational challenges in the context of complex business objectives and priorities
  • Ability to lead and manage the performance of multiple teams against a set of financial and operational objectives
  • 3+ years of experience in security engineering, cybersecurity operations, application security, or a closely related technical discipline.
  • Hands-on experience implementing technical controls for enterprise software, APIs, cloud-native services, or automation workflows.
  • Working knowledge of AI/LLM security concepts such as prompt injection, unsafe output handling, tool-use abuse, sensitive data exposure, and control boundary enforcement.
  • Experience with logging, alerting, monitoring, or detection content for identifying suspicious or policy-violating behavior in applications or workflows.
  • Understanding of access control, identity boundaries, secrets handling, secure integration design, and environment-based deployment controls.
  • Ability to work with engineering teams to translate security concerns into implementable guardrails, validations, and release controls.
  • Strong written documentation and communication skills, especially for controls, findings, remediation evidence, and technical guidance.
  • Experience operating within enterprise governance, security, and release-management practices where evidence-based deployment readiness matters.

Nice To Haves

  • Experience with AI or agentic security controls, prompt and output protection strategies, or security validation of LLM-enabled features.
  • Experience with Microsoft, Azure, Copilot / Copilot Studio, or AI-enabled enterprise workflow platforms.
  • Experience with adversarial testing, red teaming support, detection engineering, or misuse-case validation for AI-enabled systems.
  • Experience in financial services, cybersecurity, regulated enterprise environments, or platforms with high audit and control requirements.
  • Familiarity with secure tool-calling patterns, API protections, model or prompt change validation, and runtime traceability for AI systems.
  • Working knowledge of cloud-native security patterns, telemetry analysis, and deployment gating for modern engineering teams.

Responsibilities

  • Implement and maintain security controls for AI-enabled applications, agents, prompts, and workflow automations, including input validation, output filtering, access controls, and governed tool-use restrictions.
  • Support development and operationalization of AI guardrails that reduce risk from prompt injection, unsafe responses, unauthorized tool execution, data leakage, and insecure workflow behavior.
  • Assist with adversarial and misuse-oriented testing activities by preparing scenarios, validating defenses, capturing findings, and supporting remediation with engineering teams.
  • Build and maintain monitoring and alerting logic for AI or agentic systems, including suspicious prompt patterns, abnormal workflow behavior, anomalous tool invocation, or policy-violating output patterns.
  • Contribute to runtime safety patterns such as content filters, role/permission boundaries, logging, evidence capture, and traceability controls required for secure deployment and audit readiness.
  • Partner with product and engineering teams to embed AI security expectations into design reviews, acceptance criteria, test plans, and deployment gates from the beginning of the delivery cycle.
  • Validate that AI-enabled solutions meet required security and governance standards before release, including deployment gate criteria, control validation, and documentation completeness.
  • Support incident investigation and remediation for AI-related control failures, suspicious behavior, or production observations that indicate safety or security degradation.
  • Help maintain control documentation, implementation notes, runbooks, remediation evidence, and operating procedures for AI security engineering activities.
  • Collaborate with AI quality, QA, platform, data, and security teams to ensure safe, resilient, and observable AI solutions across development and production environments.
  • Continuously improve automation, guardrail logic, validation workflows, and monitoring content as AI capabilities, workflows, and attack patterns evolve.
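To illustrate the kind of guardrail and monitoring logic described above, here is a minimal sketch of deny-list screening for suspicious prompt patterns. The patterns and function names are hypothetical examples, not a Truist implementation; a production control would layer pattern checks with classifier-based detection, policy enforcement, and logging.

```python
import re

# Hypothetical deny-list of common prompt-injection indicators.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.IGNORECASE),
    re.compile(r"disable (the )?(safety|guardrail)s?", re.IGNORECASE),
]

def flag_prompt(prompt: str) -> list:
    """Return the deny-list patterns matched in a user prompt.

    Matches would feed alerting and evidence-capture pipelines.
    """
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """Deployment-gate style check: block prompts that trip any pattern."""
    return not flag_prompt(prompt)
```

In practice, a sketch like this would sit behind an input-validation boundary, with matches logged for traceability rather than silently dropped, so that detection content can be tuned as attack patterns evolve.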

Benefits

  • Truist offers medical, dental, vision, life insurance, disability, accidental death and dismemberment, tax-preferred savings accounts, and a 401k plan to teammates.
  • Teammates also receive no less than 10 days of vacation (prorated based on date of hire and by full-time or part-time status) during their first year of employment, along with 10 sick days (also prorated), and paid holidays.
  • Depending on the position and division, this job may also be eligible for Truist’s defined benefit pension plan, restricted stock units, and/or a deferred compensation plan.