Applied AI Analyst

Metropolitan Commercial Bank | New York, NY
$120,000 - $150,000 | Hybrid

About The Position

Metropolitan Commercial Bank (the “Bank”) is seeking an Applied AI Analyst to support the design, configuration, testing, and monitoring of applied AI, Generative AI, machine learning, and agentic workflow solutions in a highly regulated banking environment. Working under the guidance of AI Scientists and in partnership with engineering, risk, and business teams, this role focuses on internal human-in-the-loop use cases such as AI-assisted loan documentation generation, fraudulent transaction detection models, policy and knowledge copilots, workflow automation, and other productivity solutions.

A meaningful portion of the role is dedicated to AI solution intake and delivery support: learning new business domains, engaging business owners, preparing required documentation, and supporting vendor and risk reviews so that solutions can be built and deployed under strict control gates and documentation standards aligned to internal and external regulations. The role emphasizes strong documentation, human oversight, privacy-by-design, cybersecurity, and disciplined lifecycle controls, as well as familiarity with Microsoft Foundry / Foundry Agent Service, approved Microsoft Copilot ecosystems, RAG architectures, and enterprise data platforms such as Snowflake.

Standard schedule is 4 days in-office and 1 remote day of your choosing.

Requirements

  • 2+ years of relevant professional experience.
  • Bachelor’s degree in Computer Science, Data Science, Statistics, Engineering, Information Systems, Mathematics, or a related field is preferred. Relevant internships, apprenticeships, or comparable hands-on AI, analytics, or automation project experience will be considered.
  • Working knowledge of Python, SQL, notebooks, REST APIs, JSON/YAML, and version control (e.g., Git) for data analysis and AI workflow support.
  • Understanding of traditional machine learning model development and lifecycle concepts, including model training, data leakage prevention, testing/validation, monitoring, and common performance metrics for classification and regression.
  • Familiarity with LLM application patterns, including prompt engineering, structured outputs, function/tool calling, Retrieval-Augmented Generation (RAG), embeddings, and vector search.
  • Familiarity with Microsoft Foundry (including Foundry Agent Service), Microsoft 365 Copilot / Copilot Studio concepts, and approved enterprise data/AI platforms such as Snowflake.
  • Understanding of agentic workflow concepts such as MCP servers/tools, skills/plugins, subagents or agents-as-tools, handoffs, human-in-the-loop controls, and A2A integration patterns.
  • Ability to test and evaluate AI outputs for accuracy, groundedness, hallucination, retrieval quality, bias/fairness, and basic security misuse scenarios.
  • Working knowledge of model and data governance concepts, including documentation, monitoring, change control, inventories, audit trails, and three-lines-of-defense oversight in a regulated environment.
  • Strong written and verbal communication skills; ability to turn technical findings into clear summaries, test evidence, and action items for business and risk stakeholders.
  • Analytical, organized, and adaptable mindset; ability to learn quickly, manage multiple workstreams, and balance innovation with risk discipline.
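To illustrate the output-evaluation skills listed above (testing AI outputs for groundedness and hallucination), here is a deliberately simplified sketch. The token-overlap heuristic, function names, and example texts are hypothetical stand-ins for real evaluation tooling, not anything the Bank actually uses:

```python
# Toy groundedness check of the kind an Applied AI Analyst might run when
# evaluating RAG outputs. The scoring heuristic (token overlap with the
# retrieved context) is a simplified stand-in for a real evaluation framework.

def tokenize(text: str) -> set[str]:
    """Lowercase whitespace tokenization; real pipelines use proper NLP."""
    return {t.strip(".,;:()").lower() for t in text.split()}

def groundedness_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.
    A low score flags a possibly hallucinated answer for human review."""
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 0.0
    context_tokens = tokenize(context)
    return len(answer_tokens & context_tokens) / len(answer_tokens)

context = "The credit memo must cite the borrower's audited 2023 financials."
grounded = "The memo cites the borrower's audited 2023 financials."
ungrounded = "Approval was granted based on a verbal guarantee."

# The grounded answer should score higher than the ungrounded one.
assert groundedness_score(grounded, context) > groundedness_score(ungrounded, context)
```

In practice this kind of check is one signal among many (retrieval quality, bias/fairness, adversarial probes), and borderline scores are escalated to a human reviewer rather than auto-judged.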

Nice To Haves

  • Financial services exposure in fraud, AML/KYC/CDD/EDD, underwriting, commercial banking, treasury, policy governance, or contact center analytics.
  • Hands-on experience with Microsoft Foundry / Foundry Agent Service, Azure AI Search or equivalent retrieval tooling, M365 Copilot / Copilot Studio, Snowflake, or similar enterprise AI platforms.
  • Experience with RAG architectures, vector databases / vector search, document chunking, metadata extraction, OCR/document understanding, and prompt engineering.
  • Exposure to agentic workflow patterns such as MCP tool integration, skills/plugins, handoffs, subagents, A2A, and evaluation frameworks for task completion and tool-call quality.
  • Familiarity with LangChain, Semantic Kernel, LangGraph, Microsoft Agent Framework, or similar orchestration libraries/frameworks.
  • Awareness of regulatory and supervisory expectations, including SR 11-7 and SR 23-4, AI use-case intake / inventory processes, and privacy, cybersecurity, and third-party risk requirements in a regulated environment.
  • Ability to work in a constantly evolving environment; comfortable handling ambiguity, managing multiple tasks at once, and shifting effectively between priorities.
  • Strong analytical, troubleshooting, and problem-solving skills, with the ability to learn new technologies quickly.
  • Self-directed, organized, and collaborative team player who can find practical solutions in a dynamic work environment.
  • Ability to synthesize multiple sources of information with an understanding of the bigger-picture needs and operations of the Bank.
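Several items above mention document chunking for RAG pipelines. As a rough illustration, here is a toy fixed-size chunker with overlap; the chunk size and overlap defaults are hypothetical, and production pipelines typically chunk on semantic or structural boundaries instead:

```python
# Toy fixed-size document chunker with overlap, of the kind used to prepare
# documents for embedding and vector search in a RAG pipeline. Defaults are
# illustrative, not tuned values.

def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows.
    Overlap preserves context that would otherwise be cut at a chunk boundary."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)
            if text[i:i + chunk_size]]

doc = "x" * 120
chunks = chunk_text(doc)
# Every chunk fits the window, and adjacent chunks share the overlap region.
assert all(len(c) <= 50 for c in chunks)
```

Chunk boundaries and metadata (source document, section, page) are exactly the kind of retrieval-quality detail the evaluation work in this role examines.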

Responsibilities

  • Support AI Scientists in building, configuring, and testing applied AI/GenAI solutions for high-value banking use cases such as credit memo generation, policy/regulatory summarization, knowledge assistants, and internal productivity copilots.
  • Develop and refine prompts, grounding instructions, structured outputs, evaluation datasets, and Retrieval-Augmented Generation (RAG) pipelines using embeddings, vector search, document chunking, and metadata-driven retrieval.
  • Prepare and transform structured and unstructured data; assist with API integrations, notebooks, and repeatable workflows that improve quality, traceability, and analyst productivity.
  • Assist in designing and testing agentic workflows using approved enterprise platforms and frameworks, including multi-agent patterns, human-in-the-loop approvals, handoffs, and subagent / agents-as-tools designs.
  • Work with emerging interoperability patterns such as Model Context Protocol (MCP), tool / skill / plugin integration, and Agent-to-Agent (A2A) connectivity under supervision and within approved guardrails.
  • Document workflow steps, tool schemas, fallback logic, escalation paths, and safe disablement / rollback procedures for agentic and GenAI solutions.
  • Support prototyping and evaluation in Microsoft Foundry / Foundry Agent Service and related Microsoft ecosystems (for example, Microsoft 365 Copilot) while partnering with Engineering and Data teams operating on Snowflake and other approved enterprise platforms.
  • Assist with prompt and agent configuration, retrieval integration, logging, model / tool versioning, and operational runbooks.
  • Create and execute test cases for accuracy, groundedness, hallucination, task completion, retrieval quality, bias/fairness, and basic adversarial or prompt-injection scenarios; escalate issues promptly.
  • Track KPIs/KRIs, output quality, drift indicators, and user feedback; maintain evidence needed for pilot reviews, production monitoring, and periodic reassessment.
  • Support AI use-case intake and governance processes by preparing required documentation (e.g., AI intake forms, model/system inventories, change-control artifacts, and audit-ready evidence packs) aligned to MCB’s Trustworthy & Responsible AI Principles and internal approval processes.
  • Learn domain and process context for new use cases by speaking with business owners and control partners; translate requirements into clear problem statements, data needs, user journeys, control gates, and documentation required to comply with internal and external regulations.
  • Support third-party / AI vendor due diligence by collecting and organizing evidence, reviewing vendor documentation, and helping risk-tier AI vendors and solutions in partnership with Third-Party Risk and Cyber/IT; act as an AI subject matter expert, helping stakeholders understand AI-specific risks.
  • Apply privacy-by-design, data minimization, access-control, and secure prompt/data handling practices; follow approved tooling and build standards.
  • Communicate findings, limitations, and recommendations clearly to AI Scientists, managers, control partners (Model Risk, Compliance/Legal, Cyber/IT, Data Privacy), and business stakeholders; incorporate feedback quickly and accurately.
  • Contribute to playbooks, standard operating procedures, and reusable templates for prompts, evaluations, agent patterns, and workflow controls.
  • Stay current on practical applied-AI methods—including RAG, evaluation frameworks, agentic workflow design, MCP, A2A, and Microsoft Foundry capabilities—and recommend fit-for-purpose uses under established control gates.
  • Demonstrate sound judgment, curiosity, and willingness to learn from senior AI Scientists while promoting responsible AI, reproducibility, and disciplined execution.