AI Data Engineering Lead

Truist
Charlotte, NC
Onsite

About The Position

The AI Data Engineering Lead designs, builds, and maintains the data pipelines, ingestion frameworks, transformation logic, and governed data services that power AI-enabled applications, agentic systems, analytics workflows, and enterprise reporting at The Forge. This is a hands-on engineering role focused on building reliable, scalable, and observable data infrastructure. The engineer works across ingestion, transformation, storage, retrieval, and delivery layers, ensuring that the right data reaches the right systems in a governed, auditable, and production-ready state.

Daily work includes building and maintaining ETL/ELT pipelines, integrating enterprise data sources, implementing data quality and validation logic, supporting AI and agentic retrieval patterns, managing data contracts and schemas, and partnering with engineering, product, and analytics teams to deliver data that is clean, current, trustworthy, and useful.

Requirements

  • Bachelor's degree with a minimum of ten years of relevant experience in the IT field, including cybersecurity
  • Direct experience with financial services institutions, as well as demonstrable experience protecting vertical-relevant data types (PII, PHI) and meeting the associated legal requirements (HIPAA, etc.)
  • Ability to evaluate the cyber risk of technical solutions through analysis of architectural documents
  • Ability to relate business requirements and risks to technical controls, systems and processes
  • Highly adaptable to a constantly changing business and technology environment
  • Strategic thinker with a 'big picture' perspective and a broad understanding of information security, risk management, and their direct applications to business processes
  • Excellent leadership skills with the ability to leverage cross-functional teams to meet defined objectives
  • Outstanding executive presentation and communication skills
  • Extraordinary thought leadership, influencing, and problem-resolution skills

Nice To Haves

  • 3+ years of data engineering experience building and supporting production data pipelines, ETL/ELT workflows, and enterprise data services.
  • Strong programming ability in Python and SQL, with working knowledge of modern data transformation and orchestration frameworks.
  • Experience integrating structured and unstructured data sources including relational databases, APIs, cloud storage, and event streaming platforms.
  • Experience implementing data quality, validation, schema management, and lineage tracking for production data systems.
  • Familiarity with cloud-native data platforms, managed data services, and modern data warehouse or lakehouse architectures.
  • Experience with CI/CD-aligned data delivery, version control, and engineering best practices for data infrastructure.
  • Ability to work across data producers and consumers including AI systems, analytics platforms, and business applications.
  • Strong communication skills and ability to work effectively in cross-functional enterprise delivery teams.
  • Experience building data infrastructure that supports AI/ML pipelines, vector retrieval, RAG patterns, or agent memory systems.
  • Experience with Microsoft Fabric, Azure Data Factory, Azure Synapse, Azure AI Search, or comparable enterprise data platforms.
  • Experience with streaming data patterns using event-driven architectures or platforms such as Kafka, Event Hubs, or equivalent.
  • Experience in financial services, cybersecurity, or other regulated enterprise environments with strong data governance and audit requirements.
  • Familiarity with data mesh, data contract, or federated data ownership patterns in large enterprise organizations.
  • Experience with observability tooling, pipeline monitoring, and data quality frameworks for production data systems.

Responsibilities

  • Design, build, and maintain data pipelines, ingestion workflows, and transformation logic that deliver clean, governed, and reliable data to AI systems, analytics tools, and enterprise consumers.
  • Integrate enterprise data sources including structured databases, APIs, event streams, file systems, and cloud data services into the Forge data ecosystem using approved patterns.
  • Implement data quality checks, validation logic, schema enforcement, and lineage tracking to ensure data entering AI and analytics systems is accurate, complete, and auditable.
  • Build and maintain data models, transformation layers, and serving structures that support AI grounding, retrieval-augmented generation (RAG), agent memory, vector indexing, and analytics delivery.
  • Support the design and implementation of data contracts between upstream producers and downstream consumers including AI agents, applications, dashboards, and reporting tools.
  • Optimize pipeline performance, reliability, and cost across batch, streaming, and event-driven data movement patterns.
  • Instrument data pipelines with observability, alerting, and monitoring so that data failures, quality degradations, and schema drift are detected and resolved quickly.
  • Partner with agentic engineering, application, platform, security, and QA teams to ensure data is delivered in formats, cadences, and access patterns that support production AI workflows.
  • Maintain documentation for pipelines, data models, integration specifications, data dictionaries, and operational runbooks.
  • Continuously improve data engineering practices, tooling, and automation as Forge AI capabilities and data volume scale.

Benefits

  • Medical
  • Dental
  • Vision
  • Life insurance
  • Disability
  • Accidental death and dismemberment
  • Tax-preferred savings accounts
  • 401(k) plan
  • Vacation
  • Sick days
  • Paid holidays