AVP, Data Platform & Engineering

Lincoln Financial
Radnor, PA
Hybrid

About The Position

Lincoln Financial is investing heavily in modern data and AI capabilities to drive business transformation at enterprise scale. As AVP, Data Platform & Engineering, you will serve as the senior technical leader responsible for architecting, building, and operating Lincoln's modern data infrastructure, including the cloud data lakehouse, batch and streaming pipelines, feature store, and data platform services that power every AI and analytics use case across the firm. This role sits within the AI, Data & Analytics organization and reports to the Chief Data & AI Engineering Officer. You will work in close daily partnership with Data Strategy & Governance (which defines logical models, standards, and requirements), AI/ML & Agentic Engineering (which consumes data for model training and inference), and the CIO organization (to ensure enterprise integration, security, and compliance). This is a senior, hands-on leadership role combining deep technical ownership with organizational scale, executive partnership, and measurable business impact.

What You'll Be Doing

Data Platform Architecture & Strategy

  • Own end-to-end architecture for the data platform, including cloud lakehouse, pipelines, streaming, and feature store infrastructure optimized for AI
  • Define multi-year platform roadmaps and lead strategic build-vs-buy decisions
  • Manage senior vendor relationships across Databricks, Snowflake, cloud providers, and orchestration platforms
  • Establish engineering standards (CI/CD, data contracts, observability, data mesh patterns)
  • Partner with the CIO organization to ensure alignment with enterprise systems, security, and IT governance
  • Collaborate with Data Strategy & Governance leaders to implement logical architectures and canonical standards

Data Engineering & Pipeline Execution

  • Build and operate scalable ELT/ETL pipelines integrating data from 50+ legacy and modern systems
  • Implement data contracts, validation, monitoring, and lineage for AI-critical data assets
  • Deploy and scale real-time streaming architectures (Kafka/Kinesis) for low-latency AI inference
  • Modernize legacy ETL tooling and migrate to cloud-native platforms
  • Optimize performance, reliability, and cost across data storage and compute environments

Feature Store & AI Data Enablement

  • Implement and run the enterprise feature store platform, translating logical requirements into production infrastructure
  • Deliver standardized data products and semantic layers that enable self-service for AI teams and analytics users
  • Build pipelines supporting AI training data, testing datasets, and production inference
  • Implement AI-specific data quality monitoring aligned to governance standards
  • Enable AI teams to develop, test, and deploy models independently and at scale

Platform Operations & Engineering Excellence

  • Own platform reliability, deployment, monitoring, and operational excellence for all AI-supporting data infrastructure
  • Build and lead a high-performing team of data engineers and platform engineers
  • Implement comprehensive observability (pipeline health, freshness, quality, cost, uptime)
  • Ensure compliance with financial services regulatory requirements in partnership with InfoSec, Legal, and Compliance
  • Establish reusable frameworks, tooling, and automation to maximize engineering velocity and developer productivity

Requirements

  • 12+ years of software, data, or platform engineering experience, with 5+ years in senior leadership roles
  • Deep hands-on expertise with Databricks and/or Snowflake (non-negotiable) and strong experience with AWS, Azure, or GCP
  • Proven experience designing and operating enterprise data lakehouses, data meshes, streaming platforms, and modern data stacks
  • Track record of migrating from legacy data warehouses and ETL tools to cloud-native platforms
  • Experience operating in highly regulated environments with strong security, audit, and compliance requirements
  • Strong executive presence with the ability to partner effectively with CIO organizations
  • Bachelor’s degree required; Master’s degree strongly preferred

Nice To Haves

  • Insurance or financial services domain expertise
  • Real-time ML systems and low-latency feature serving
  • Vector databases and semantic search platforms
  • Kubernetes, containerization, and infrastructure as code
  • Data observability, catalog, and lineage tooling

Benefits

  • Clearly defined career tracks and job levels, along with associated behaviors for each of Lincoln's core values and leadership attributes
  • Leadership development and virtual training opportunities
  • PTO/parental leave
  • Competitive 401K and employee benefits
  • Free financial counseling, health coaching and employee assistance program
  • Tuition assistance program
  • Work arrangements that work for you
  • Effective productivity/technology tools and training