About The Position

EY is seeking a Forward Deployed Engineer - Applied AI - Manager for its Financial Services Office (FSO). The role leads the delivery of solution and infrastructure development services for large or complex AI/ML initiatives, drawing on strong technical capability and hands-on engineering experience. It carries accountability for the design, development, delivery, and maintenance of AI-enabled solutions and infrastructure in compliance with engineering standards. The manager translates business and user requirements into effective design specifications and owns the implementation and integration of AI/ML capabilities into broader enterprise solutions, with a focus on reliability, scalability, user impact, and successful project delivery. Financial institutions are looking for smarter, more efficient ways to operate their business, create new revenue streams, and better manage risk through opportunities uncovered by their data; this role is central to helping clients unlock the potential of Artificial Intelligence by focusing on its application and strategy for enterprise transformation.

Requirements

  • Ability to understand complex business challenges across banking, capital markets, insurance, and asset management and translate them into LLM-powered solutions that deliver measurable business value
  • Practical experience leading and managing multi-disciplinary teams through the full AI product lifecycle — requirements, architecture, build, evaluation, and production handoff
  • Demonstrated experience managing and mentoring teams of AI engineers and data scientists through the execution of specific business use cases, ensuring technical quality and delivery consistency across engagements
  • Advanced hands-on software engineering proficiency in Python, with the credibility to guide implementation decisions as well as architecture across delivery teams
  • Demonstrated experience architecting and delivering production-grade LLM applications including retrieval-augmented systems, agentic orchestration layers, and structured output pipelines at enterprise scale (e.g. LlamaIndex, LangChain, Azure OpenAI, AWS Bedrock)
  • Strong knowledge of embedding models, vector search, semantic retrieval, and NLP similarity systems used in enterprise RAG and knowledge AI architectures (e.g. OpenAI Embeddings, Cohere Embed, Azure AI Search, FAISS etc.)
  • Deep expertise in LLMOps practices including model lifecycle management, versioning, CI/CD for AI systems, deployment governance, and continuous improvement loops in production environments (e.g. MLflow, Azure ML, GitHub Actions, Kubeflow etc.)
  • Expertise in agentic system architecture including multi-agent orchestration, tool use patterns, memory design, and human-in-the-loop workflows for high-stakes production environments (e.g. LangGraph, AutoGen, Semantic Kernel, CrewAI, NVIDIA NIM etc.)
  • Experience governing agent behavior in production environments including audit trail design, cost and latency controls, and reliability management across complex multi-agent pipelines
  • Demonstrated exploration of new LLM techniques and emerging agentic patterns, with the ability to assess their applicability to client challenges and translate them into practical delivery approaches
  • Experience defining and governing LLM evaluation frameworks across teams and engagements, ensuring consistent measurement of output quality, safety, and alignment with business requirements (e.g. RAGAS, DeepEval, Arize, Weights & Biases etc.)
  • Ability to drive performance, resilience, maintainability, and cost efficiency improvements in deployed LLM and agentic systems, including post-deployment optimization and operational tuning
  • Knowledge of MLOps practices for continuous integration and continuous deployment of AI systems in cloud environments, including containerization and orchestration for scalable and secure LLM deployment (e.g. Azure DevOps, GitHub Actions, Kubeflow, MLflow etc.)
  • Experience governing API design standards for LLM and agentic systems including contract design, versioning, error handling, retry semantics, and decoupling of AI service consumers from internal model and workflow topology
  • Strong system design capability across service boundaries, asynchronous workflows, data contracts, cloud-native patterns, and secure deployment models for AI-enabled applications
  • Proficiency in containerization and orchestration for deploying and managing scalable LLM applications in production cloud environments (e.g. Docker, Kubernetes, Azure Container Apps, AWS ECS etc.)
  • Ability to collaborate with data engineers, ML engineers, and business stakeholders to align LLM solution design with enterprise data and technology constraints
  • Clear communicator able to explain complex AI system behavior and trade-offs to technical and non-technical stakeholders, including risk and compliance
  • Strong ownership and accountability, taking responsibility for AI systems from design through production and issue resolution
  • Comfort with ambiguity, able to operate effectively as requirements, regulations, and technologies evolve
  • Collaborative and cross-functional, working closely with engineering, product, risk, legal, and audit teams
  • Sound judgment in regulated environments, with awareness of risk, controls, and when human oversight is required
  • 7+ years of applied engineering experience, including significant experience in AI/ML engineering roles

Nice To Haves

  • Experience advising clients on AI platform and infrastructure strategy including model access layer selection, build-vs-buy decisions, and integration with existing data and technology infrastructure (e.g. Azure OpenAI, AWS Bedrock, Google Vertex AI, NVIDIA AI Enterprise, Hugging Face etc.)
  • Ability to quantify business improvement resulting from LLM solutions through defined evaluation metrics, performance benchmarks, and client-facing reporting
  • Strong ability to design and govern model observability and monitoring strategies across engagements, covering output quality, behavioral drift, and multi-step agentic workflow tracing (e.g. LangSmith, Arize, Datadog, Azure Monitor etc.)
  • Understanding of LLM fine-tuning methodologies and the ability to advise clients on when and how to apply them, including data preparation, training approaches, and post-training evaluation (e.g. LoRA, QLoRA, PEFT, NeMo Framework etc.)
  • Experience leading controlled model rollout programs including shadow deployment, A/B testing, canary releases, and stakeholder sign-off processes with defined rollback criteria
  • Familiarity with AI security risks specific to LLM systems including prompt injection, data poisoning, and model extraction, and the ability to advise on mitigation and audit trail requirements
  • Familiarity with bias, fairness, and explainability approaches and their application in financial services AI systems
  • Familiarity with system design principles for AI — scalability, fault tolerance, and distributed architecture for production AI workloads
  • Familiarity with data pipeline architecture for enterprise AI workloads including ingestion, transformation, and governance
  • Understanding of data security and privacy best practices in cloud environments as they apply to LLM application development and deployment
  • Familiarity with AI-assisted software engineering tools as part of delivery leadership and engineering execution (e.g. Claude Code, GitHub Copilot, Codex etc.)
  • Familiarity with GPU-accelerated AI workloads and cloud AI services for model inference and deployment at scale (e.g. NVIDIA GPU platforms, Azure ML, AWS SageMaker etc.)
  • Familiarity with agile and modern engineering delivery methodologies as applied to AI/ML initiatives
  • Master of Business Administration (MBA) or Master of Science (MS) degree preferred
  • Prior consulting experience

Responsibilities

  • Design, develop, test, deploy, and support production-grade AI/ML, generative AI, and intelligent automation solutions.
  • Solve complex technical problems through coding, debugging, testing, troubleshooting, and structured design remediation.
  • Translate business and user requirements into sound technical designs, APIs, workflows, and supportable implementation patterns.
  • Build and integrate LLM, RAG, and agentic solution components into enterprise applications and platforms.
  • Contribute to system design across service boundaries, orchestration layers, data flows, security controls, and external integrations.
  • Lead workstreams or project delivery responsibilities through planning, coordination, execution oversight, issue management, and stakeholder communication.
  • Drive engineering quality through strong coding standards, CI/CD practices, automated testing, observability, and documentation.
  • Partner with Development, Engineering, Product, Data, Architecture, and engagement leadership teams to deliver high-value AI capabilities.
  • Improve performance, resilience, maintainability, and cost efficiency of deployed AI systems.
  • Participate in architecture and design reviews, providing thoughtful trade-off analysis and implementation guidance.
  • Use modern AI-assisted software engineering tools such as Claude Code, Codex, or equivalent agentic coding platforms as part of delivery leadership and engineering execution.

Benefits

  • Comprehensive compensation and benefits package
  • Medical and dental coverage
  • Pension and 401(k) plans
  • Wide range of paid time off options
  • Flexible vacation policy
  • Designated EY Paid Holidays
  • Winter/Summer breaks
  • Personal/Family Care leave
  • Other leaves of absence when needed to support physical, financial, and emotional well-being