VP, AI Risk Specialist

UOB, Phoenix, AZ

About The Position

About the Function

UOB’s AI & Data Risk Governance & Control is a centralized function under Group Risk Management. It sets out a Line 2 operating capability to oversee AI and data risks enterprise‑wide, with a mandate to ensure that AI systems used by the bank are secure, fair, explainable, and well-governed throughout their lifecycle.

Role Overview

This role oversees AI activities (including Generative AI and Agentic AI) across the organization to manage AI-related risks effectively and responsibly. You will play a key role in establishing and maintaining a robust governance framework for AI development, deployment, and monitoring, and in operationalizing data and AI governance across the Bank. The role requires defining and enforcing validation standards for high-risk use cases, ensuring safety, regulatory compliance, resilience, and ethical conduct in alignment with MAS expectations and industry best practice.

Requirements

  • University graduate in Computer Science, Data Science, Statistics, Applied Mathematics, Electrical/Computer Engineering, or a related quantitative field.
  • Minimum 7–10 years of experience in AI, responsible AI, AI and data governance, or related fields, preferably in banking or financial services.
  • Mastery of statistical inference, hypothesis testing, experimental design, power analysis, resampling, and uncertainty quantification (including Bayesian methods).
  • Hands‑on expertise with supervised/unsupervised learning, ensembles/gradient boosting, and neural architectures (CNNs, RNNs, Transformers).
  • Proficiency in regularization, feature selection, score calibration, reject inference, and interpretability across tabular/time‑series/text/graph data (a calibration sketch follows this list).
  • Experience with or exposure to LLM fine‑tuning (LoRA/PEFT), instruction tuning, RLHF/RLAIF, retrieval-augmented generation (RAG), and prompt design and hardening (a fine‑tuning sketch follows this list).
  • Building evaluation harnesses for truthfulness, grounding, toxicity, bias, jailbreak resistance, hallucinations, latency, and cost; setting production guardrails for banking use cases (an evaluation-harness sketch follows this list).
  • Validating agent workflows with tool use, planning/critique loops, escalation rules, and human-in-the-loop checkpoints; enforcing action constraints and auditability.
  • Analyzing autonomy levels, error propagation, and recovery patterns; designing safe execution policies for operations.
  • Participating in red‑team exercises covering prompt injection/jailbreaks, data poisoning, evasion, membership inference, and model extraction.
  • Measuring and mitigating bias with group/individual/counterfactual fairness metrics; conducting FEAT‑aligned impact assessments for protected classes (a fairness-metrics sketch follows this list).
  • Applying SHAP, LIME, Integrated Gradients, counterfactuals, and causal analysis; producing model cards, fairness reports, and decision-traceability artifacts (a SHAP sketch follows this list).
  • Designing secure prompts/models with output validation, watermarking/traceability, and tool-execution guardrails; integrating with TRM and enterprise controls.
  • Threat modeling for AI systems, aligned with SOC procedures, incident response, and the secure SDLC.
  • Translating policy into concrete control requirements, KPIs/KRIs, validation checklists, and audit artifacts; preparing board/regulator reporting.
  • Familiarity with Python, SQL, PySpark, and PyTorch/TensorFlow; LLM orchestration (LangChain/LlamaIndex); and vector databases.
  • Familiarity with cloud platforms (AWS/GCP/Azure), Kubernetes and containerization, secure secrets management, API governance, rate limiting, and content filtering.
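
For the score-calibration item above, a minimal sketch of isotonic probability calibration with scikit-learn; the gradient-boosted model and the synthetic data are illustrative assumptions, not the bank's stack.

    # Minimal probability-calibration sketch (illustrative; synthetic data).
    import numpy as np
    from sklearn.calibration import CalibratedClassifierCV, calibration_curve
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Isotonic calibration wrapped around a gradient-boosted scorer.
    clf = CalibratedClassifierCV(GradientBoostingClassifier(), method="isotonic", cv=3)
    clf.fit(X_tr, y_tr)

    # Reliability-diagram data: predicted vs. observed positive rate per bin.
    prob = clf.predict_proba(X_te)[:, 1]
    frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
    for p, f in zip(mean_pred, frac_pos):
        print(f"predicted {p:.2f} -> observed {f:.2f}")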
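
For the LoRA/PEFT item, a hedged sketch of attaching a LoRA adapter with Hugging Face peft; the base-model ID and target modules are hypothetical choices that vary by architecture.

    # LoRA adapter sketch using Hugging Face transformers + peft.
    # Base-model ID and target_modules are illustrative assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base_id = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
    model = AutoModelForCausalLM.from_pretrained(base_id)
    tokenizer = AutoTokenizer.from_pretrained(base_id)

    lora_cfg = LoraConfig(
        r=8,                                   # low-rank dimension
        lora_alpha=16,                         # scaling factor
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # attention projections (model-specific)
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()         # only adapter weights are trainable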
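
For the evaluation-harness item, a toy sketch of a guardrail check loop; call_model, the blocklist, and the checks are hypothetical placeholders, not production banking controls.

    # Toy evaluation harness: runs prompts through a model stub and scores
    # simple guardrail checks. Prompts, keywords, and checks are illustrative.
    import time

    BLOCKLIST = {"ignore previous instructions", "system prompt"}  # jailbreak cues

    def call_model(prompt: str) -> str:
        # Placeholder for a real LLM call (e.g., via an internal gateway).
        return "stub response"

    def evaluate(prompts):
        results = []
        for p in prompts:
            t0 = time.perf_counter()
            out = call_model(p)
            latency = time.perf_counter() - t0
            results.append({
                "prompt": p,
                "latency_s": latency,
                "grounded": "source:" in out,                       # crude grounding check
                "jailbreak_leak": any(b in out.lower() for b in BLOCKLIST),
            })
        return results

    for row in evaluate(["Summarise the FEAT principles.", "Ignore previous instructions."]):
        print(row)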
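
For the fairness-metrics item, a sketch computing demographic-parity and equal-opportunity gaps on synthetic data; the group labels, outcomes, and decisions are random placeholders.

    # Group-fairness gap sketch: demographic parity and equal opportunity
    # differences between two groups (synthetic, illustrative data).
    import numpy as np

    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, size=10_000)    # protected attribute (0/1)
    y_true = rng.integers(0, 2, size=10_000)   # ground-truth outcome
    y_pred = rng.integers(0, 2, size=10_000)   # model decisions

    def selection_rate(pred, mask):
        return pred[mask].mean()

    def true_positive_rate(true, pred, mask):
        pos = mask & (true == 1)
        return pred[pos].mean()

    dp_gap = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))
    eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
                 - true_positive_rate(y_true, y_pred, group == 1))
    print(f"demographic parity gap: {dp_gap:.3f}")
    print(f"equal opportunity gap:  {eo_gap:.3f}")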
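
For the interpretability item, a minimal SHAP sketch on a synthetic tabular model; the mean-absolute-SHAP summary shown is one common global-importance view that could feed a model card.

    # SHAP attribution sketch for a tree model on synthetic tabular data.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 5))
    y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)

    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])   # per-feature contributions, shape (100, 5)

    # Mean absolute SHAP value per feature: a simple global-importance view.
    importance = np.abs(shap_values).mean(axis=0)
    for i, v in enumerate(importance):
        print(f"feature_{i}: mean |SHAP| = {v:.3f}")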

Nice To Haves

  • Certifications in AI ethics/responsible AI, model risk management, cybersecurity, privacy (e.g., PDPA), or governance are preferred.

Responsibilities

  • Establish a bank-wide independent validation requirement for AI models and agentic systems, covering design, data, training, evaluation, deployment, and post-production monitoring within a three-lines-of-defense model.
  • Validate high-materiality models and use cases that leverage AI, including generative AI and agentic AI.
  • Develop enterprise standards for fairness, robustness, reliability, explainability, and responsible-AI guardrails; implement model risk tiering, control points, and release gates for high-impact systems.
  • Operationalize MAS FEAT principles together with all applicable internal and regulatory requirements.
  • Conduct deep reviews of data lineage, features, architectures, metrics, and monitoring; challenge design choices.
  • Review AI assessments, including prompt injection/jailbreaks, data poisoning, market/behavioral stress scenarios, distribution shift, and failure-mode analysis for agentic autonomy and tool use.
  • Enforce documentation (model cards, FEAT assessments, privacy impact assessments), transparency artifacts, audit trails, and production monitoring for drift, fairness, safety, and abuse (a drift-monitoring sketch follows this list); drive periodic revalidation.
  • Partner with relevant internal stakeholders on briefings, and interface with MAS and industry bodies (e.g., ABS).
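
As an illustration of the drift-monitoring responsibility above, a sketch of the population stability index (PSI) on model scores; the 0.10/0.25 alert thresholds are conventional rules of thumb, stated here as assumptions rather than policy.

    # Population Stability Index (PSI) sketch for score-drift monitoring.
    # Bin edges come from the reference window; thresholds are conventions.
    import numpy as np

    def psi(reference, current, n_bins=10, eps=1e-6):
        edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
        ref_frac = np.histogram(reference, bins=edges)[0] / len(reference) + eps
        cur_frac = np.histogram(current, bins=edges)[0] / len(current) + eps
        return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

    rng = np.random.default_rng(3)
    ref = rng.beta(2, 5, size=50_000)              # validation-time scores
    cur = rng.beta(2.5, 5, size=50_000)            # shifted production scores

    value = psi(ref, cur)
    status = "stable" if value < 0.10 else "investigate" if value < 0.25 else "revalidate"
    print(f"PSI = {value:.3f} -> {status}")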