Senior Lead Quantitative Analytics Specialist

Wells Fargo & Company, Irving, TX
Hybrid

About The Position

About this role: Wells Fargo is seeking a Senior Lead Quantitative Analytics Specialist to serve as a strategic advisor and technical authority driving the development, governance, and deployment of highly complex quantitative and advanced analytics solutions and advancing GenAI and Agentic AI capabilities across Commercial Banking. The role sits within the Commercial Banking Model Development Center and works in close partnership with the Technology, Business, Product, and Risk organizations. It is pivotal in accelerating the build-out and adoption of Generative AI and Agentic AI capabilities, providing strategic vision and technical leadership to ensure solutions are robust, scalable, and aligned with regulatory and market expectations. This leader will bring scientific rigor, structured experimentation, and model risk discipline to our GenAI initiatives, ensuring our AI systems are measurable, explainable, responsible, and continuously improving. The full set of duties is listed under Responsibilities below.

Requirements

  • 7+ years of Quantitative Analytics experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
  • Master's degree or higher in a quantitative discipline such as mathematics, statistics, engineering, physics, economics, or computer science

Nice To Haves

  • Experience leading technical teams or pods
  • Strong expertise in: GenAI experiment design and inference; LLM evaluation methodologies and frameworks; model performance diagnostics and failure analysis
  • Experience working with Tech/engineering teams to deploy experimental work into production systems
  • Strong full-stack data science skills, with hands-on experience with ADK, MCP, A2A, and O2A
  • Experience designing and developing LLM evaluation frameworks, guardrails, assertions, controls, and performance monitoring
  • Advanced proficiency in Python, PySpark, TensorFlow, PyTorch, and cloud platforms including Google Cloud Platform (GCP), Vertex AI, Google ADK, and MCP
  • Strong background in GenAI, agentic workflows, machine learning, deep learning, and statistical analysis, evidenced by successful deployment at scale
  • Experience working with LLM-based systems (RAG, tool-use, multi-agent systems)
  • Experience with AI observability and evaluation tooling
  • Experience in financial services or other highly regulated industries; Commercial Banking experience a plus
  • Familiarity with regulatory, risk, and compliance considerations related to AI and GenAI model development and deployment
  • Strong communication skills with the ability to influence technical and executive audiences

Responsibilities

Experimentation & Evaluation

  • Drive the development and evaluation of Generative AI and Agentic AI solutions aligned with regulatory, audit, and market expectations
  • Design and lead experiments for GenAI and Agentic AI use cases
  • Define clear success metrics and evaluation frameworks (quality, hallucination detection, bias, robustness, agent reliability, etc.)
  • Design and implement LLM evaluation frameworks, guardrails, controls, and performance monitoring to ensure reliable and compliant model behavior
  • Conduct error attribution and failure analysis to understand, prevent, and manage agent and model performance issues
  • Conduct complex analysis to determine performance drivers and trade-offs
  • Perform root-cause analysis of model failures and degradation
  • Translate experimental findings into actionable implementation requirements for Technology

Collaboration & Partnership

  • Partner with Business, Technology, Data, and Risk teams to deliver end-to-end AI systems
  • Partner closely with Business and AI Product teams to define solutions, conduct experiments, and build out POCs and MVPs in deployment-parity environments
  • Partner closely with Technology teams on system architecture, engineering frameworks, and deployment options, ensuring solutions are testable, observable, and measurable and that experimental prototypes stay aligned with production environments
  • Partner on integrating guardrails, monitoring, and feedback loops into systems

AI Risk & Governance

  • Develop testing protocols addressing bias, explainability, stability, drift, and operational risk
  • Partner with Model Risk, Compliance, and Audit to ensure responsible AI deployment and to support transparency, governance, and compliance for quantitative and AI models
  • Document methodologies and development processes to support governance standards
  • Implement monitoring strategies that proactively identify model and agent risks

Strategy & Team Leadership

  • Lead and mentor a pod of data scientists to engineer context, orchestrate agent workflows, and build scalable AI solutions
  • Establish experimentation standards and scientific best practices
  • Advise senior leadership on AI performance, limitations, and risk posture
  • Identify high-impact GenAI opportunities grounded in measurable outcomes