About The Position

EY is the only professional services firm with a separate business unit (“FSO”) dedicated to the financial services marketplace. Our FSO teams have been at the forefront of every event that has reshaped and redefined the financial services industry. If you have a passion for rallying together to solve the most complex challenges in financial services, come join our dynamic FSO team!

The business problems our clients face today are not the ones they faced in the past. The rapid pace of development in Artificial Intelligence, and the technology that enables it, has created an urgent need to innovate and adapt to a new global business paradigm. Financial institutions are looking for smarter and more efficient ways to operate their business, create new revenue streams, and better manage risk through the opportunities uncovered by their data. We believe that to fully unlock the potential of Artificial Intelligence, we need to look not only at its applications but also at the strategy level: how best to transform the enterprise into one that is technology- and data-focused and ready for the new age. Our clients’ problems are becoming increasingly complex, while the need to automate and streamline continues to rise.

Requirements

  • Ability to understand business challenges and translate them into value-add AI solutions leveraging large language models and intelligent automation
  • Experience designing, building, and maintaining production-grade LLM applications, including end-to-end pipelines from data ingestion through model output delivery (e.g. Azure OpenAI, AWS Bedrock, Google Vertex AI)
  • Demonstrated experience building retrieval-augmented generation (RAG) systems that ground model outputs in enterprise knowledge sources, including chunking strategies, embedding pipelines, and retrieval optimization (e.g. LlamaIndex, LangChain, Pinecone, Weaviate, Azure AI Search, pgvector)
  • Knowledge of embedding models, vector search, and semantic retrieval patterns (e.g. OpenAI Embeddings, Azure AI Search, pgvector)
  • Proficiency in prompt engineering techniques including zero-shot, few-shot, chain-of-thought, and structured output design, with the ability to systematically evaluate and iterate on prompt performance (e.g. DSPy, PromptFlow)
  • Experience designing and building agentic systems, including multi-agent orchestration patterns, tool use, and memory design across single- and multi-step workflows (e.g. LangGraph, AutoGen, CrewAI, Semantic Kernel, NVIDIA NIM)
  • Ability to build reliable agent loops including failure handling, retries, fallbacks, and context window management across complex multi-step agentic workflows
  • Ability to debug, troubleshoot, and remediate production LLM and agentic systems including failure diagnosis across retrieval, orchestration, and generation layers
  • Experience designing and implementing LLM evaluation frameworks covering functional correctness, output quality, safety, and business-defined KPIs (e.g. RAGAS, DeepEval, Arize Phoenix)
  • Hands-on software engineering proficiency in Python, with the ability to write clean, modular, production-quality code for LLM pipelines and agentic applications
  • Experience working with structured and unstructured data sets to support LLM application development, including data curation, preparation, and quality validation for model inputs
  • Familiarity with RESTful and event-driven API patterns including asynchronous workflows, service boundaries, and integration of enterprise data sources to expose LLM and agentic capabilities
  • Familiarity with containerization and orchestration concepts for packaging and deploying LLM applications in cloud environments (e.g. Docker, Kubernetes, Azure Container Apps, AWS ECS)
  • Understanding of software engineering best practices as applied to ML systems, including modular code design, testing patterns for AI pipelines, and data quality validation
  • Clear communicator, able to explain complex AI system behavior and trade-offs to technical and non-technical stakeholders, including risk and compliance teams
  • Strong ownership and accountability, taking responsibility for AI systems from design through production and issue resolution
  • Comfort with ambiguity, able to operate effectively as requirements, regulations, and technologies evolve
  • Collaborative and cross-functional, working closely with engineering, product, risk, legal, and audit teams
  • Sound judgment in regulated environments, with awareness of risk, controls, and when human oversight is required
  • 3+ years of applied engineering experience, including meaningful experience in AI/ML engineering roles
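As a minimal sketch of the retrieval-augmented pattern named in the requirements above: fixed-size chunking, a toy bag-of-words "embedding" standing in for a real embedding model, and cosine-similarity retrieval of the chunks that would ground an LLM's answer. All function names and the sample document here are illustrative, not from the posting; a production system would use a real embedder and vector store.

```python
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 8) -> list[str]:
    """Naive fixed-size chunking by word count; real systems use
    sentence- or semantics-aware splitters."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query; the top-k would be
    injected into the LLM prompt to ground its answer."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Illustrative enterprise knowledge snippet and query.
doc = ("The wire transfer limit for retail accounts is 50000 USD per day. "
       "Limits can be raised with a signed authorization form. "
       "Commercial accounts follow a separate approval workflow.")
chunks = chunk(doc)
top = retrieve("What is the wire transfer limit", chunks, k=1)
print(top[0])
```

In practice the retrieval step is where chunk size, overlap, and embedding choice are tuned; the same top-k interface stays stable while those details change.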
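The reliability requirement above (retries, fallbacks, and failure handling in agent loops) can likewise be sketched in a few lines. `flaky_model` and `fallback_model` are hypothetical stand-ins for real LLM calls; the bounded-retry-then-fallback shape is the point, not the specific names.

```python
import time

class ModelError(Exception):
    """Transient failure from an upstream model call."""

def call_with_fallback(prompt, primary, fallback, retries=3, backoff=0.0):
    """Try the primary model with bounded retries and exponential
    backoff, then fall back to a secondary model.

    `primary` and `fallback` are any callables taking a prompt string;
    in production these would wrap real LLM API clients."""
    for attempt in range(retries):
        try:
            return primary(prompt)
        except ModelError:
            time.sleep(backoff * (2 ** attempt))
    return fallback(prompt)

# Hypothetical stand-in: the primary fails twice, then succeeds.
calls = {"n": 0}

def flaky_model(prompt):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ModelError("transient upstream failure")
    return f"primary answer to: {prompt}"

def fallback_model(prompt):
    return f"fallback answer to: {prompt}"

result = call_with_fallback("summarize the filing", flaky_model, fallback_model)
print(result)
```

Keeping the retry policy in one wrapper means every step of a multi-step agent loop gets the same failure semantics, which simplifies the failure diagnosis the requirements also call for.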

Nice To Haves

  • Ability to build and maintain model observability pipelines including tracing of multi-step agentic reasoning chains, output degradation detection, and behavioral drift monitoring in production (e.g. LangSmith, Arize, Datadog, Azure Monitor)
  • Familiarity with LLM fine-tuning approaches including instruction tuning and preference optimization, with an understanding of when fine-tuning is appropriate versus prompt-based solutions (e.g. LoRA, QLoRA, PEFT, NeMo Framework)
  • Familiarity with inference optimization principles (latency, throughput, and cost management) to support scalable and cost-effective LLM deployment
  • Familiarity with AI security considerations relevant to LLM systems, including prompt injection risks, adversarial input handling, and audit trail requirements
  • Familiarity with responsible AI principles including bias and fairness evaluation, human-in-the-loop design, and explainability approaches in financial services contexts
  • Familiarity with data pipeline design for AI workloads including ingestion, transformation, and quality validation
  • Familiarity with cloud-based platforms for building, training, and deploying scalable LLM solutions (e.g. Azure ML, AWS SageMaker, Google Vertex AI)
  • Familiarity with AI-assisted software engineering tools for accelerating development, implementation, and code review practices (e.g. Claude Code, GitHub Copilot, Codex)
  • Master’s degree in Business Administration (MBA) or Science (MS) preferred
  • Prior consulting experience
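As a minimal illustration of the tracing idea in the observability bullet above: each step of an agentic chain emits a timed span record that can be inspected or shipped to a backend. The step names, attributes, and in-memory trace list are illustrative assumptions, not tied to any specific tool from the posting.

```python
import time
from contextlib import contextmanager

TRACE: list[dict] = []  # in production, spans would ship to a tracing backend

@contextmanager
def span(name: str, **attrs):
    """Record the duration and attributes of one step in an agent chain."""
    start = time.perf_counter()
    record = {"name": name, **attrs}
    try:
        yield record
    finally:
        record["ms"] = (time.perf_counter() - start) * 1000
        TRACE.append(record)

# Illustrative two-step chain: retrieve, then generate.
with span("retrieve", query="transfer limits"):
    docs = ["chunk about limits"]
with span("generate", n_docs=len(docs)):
    answer = "The limit is 50000 USD."

print([s["name"] for s in TRACE])
```

Per-step spans like these are what make it possible to attribute a bad answer to the retrieval layer versus the generation layer, which is the debugging skill the requirements describe.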

Responsibilities

  • Design, develop, test, deploy, and support production-grade AI/ML, generative AI, and intelligent automation solutions.
  • Solve complex technical problems through coding, debugging, testing, troubleshooting, and structured design remediation.
  • Translate business and user requirements into technical designs, APIs, workflows, and supportable implementation patterns.
  • Build and integrate LLM, RAG, and agentic solution components into enterprise applications and platforms.
  • Contribute to system design across service boundaries, orchestration layers, data flows, security controls, and external integrations.
  • Support project delivery through disciplined execution, estimation, documentation, status communication, and risk identification.
  • Partner with Development, Engineering, Product, Data, Architecture, and project leadership teams to deliver high-value AI capabilities.
  • Improve performance, resilience, maintainability, and cost efficiency of deployed AI systems.
  • Participate in architecture and design reviews, providing thoughtful trade-off analysis and implementation input.
  • Use modern AI-assisted software engineering tools such as Claude Code, Codex, or equivalent agentic coding platforms as part of day-to-day engineering delivery.

Benefits

  • medical and dental coverage
  • pension and 401(k) plans
  • a wide range of paid time off options, including a flexible vacation policy
  • designated EY Paid Holidays and Winter/Summer breaks
  • Personal/Family Care and other leaves of absence when needed to support your physical, financial, and emotional well-being