Senior AWS Data Architect

BOK Financial, Addison, TX
Onsite

About The Position

The AWS Data Architect is responsible for the overall strategy, design, and implementation of the company's data infrastructure on Amazon Web Services (AWS). This role requires a strategic mindset focused on translating business outcomes into technical architecture, enabling automation, and defining data integration patterns for a variety of use cases. The architect will lead the transition from the current-state to the target-state data ecosystem, proposing and executing viable options based on both short- and long-term business needs. The role will also involve incorporating advanced AI architecture, including agentic AI, vector databases, Model Context Protocol (MCP), and Retrieval-Augmented Generation (RAG).

Collaboration is key to success on this fast-paced team. While each person holds an area of expertise, we all join in to support the customer. Through weekly meetings, group huddles, and one-on-one peer training, everyone is given the opportunity to brainstorm, ask questions, and find solutions. We support and lift one another up to achieve more together.

Requirements

  • Bachelor’s degree in information systems, data analytics, or a related field
  • 15+ years’ experience in a data discipline, such as developing and implementing data frameworks, reference models, or data modeling (operational and analytical)
  • Certification in TOGAF (or a similar framework) required

Responsibilities

  • Document current and target state data architectures, including interim states and migration roadmaps using AWS and other tools.
  • Present strategic execution options that weigh business impact, cost, risk, and timelines.
  • Align data initiatives with business goals to deliver measurable outcomes like improved customer experience and faster time-to-market.
  • Create and maintain robust reference architectures for batch, near-real-time, and real-time data flows from diverse source types.
  • Architect and implement scalable, low-latency data pipelines using Apache Kafka.
  • Integrate advanced AI architectures such as vector databases, MCP, and RAG to enhance data processing and retrieval.
  • Promote automation and CI/CD practices, and establish patterns and reusable templates that enable rapid, consistent development for engineering teams.
  • Drive data governance and data quality by implementing best practices throughout the data lifecycle.
  • Provide leadership and technical guidance while mentoring teams and promoting innovation.