Senior Vice President, Full-Stack Data Engineer

BNY Mellon
New York, NY
$116,500 - $220,000
Onsite

About The Position

At BNY, our culture allows us to run our company better and enables employees’ growth and success. As a leading global financial services company at the heart of the global financial system, we influence nearly 20% of the world’s investible assets. Every day, our teams harness cutting-edge AI and breakthrough technologies to collaborate with clients, driving transformative solutions that redefine industries and uplift communities worldwide. Recognized as a top destination for innovators and champions of inclusion, BNY is where bold ideas meet advanced technology and exceptional talent. Together, we power the future of finance – and this is what #LifeAtBNY is all about. Join us and be part of something extraordinary.

We’re seeking a future team member for the role of Senior Vice President, Full-Stack Data Engineer to join our Engineering Hub Analytics team. This role is located in New York, NY.

This is a hands-on senior individual contributor role embedded within the Engineering Hub Analytics practice. The Data Engineer owns data platform delivery across client engagements: designing, building, and hardening production-grade data pipelines, warehouse architectures, and data infrastructure that power AI and analytics capabilities. This is not a consulting or coordination role; it is an engineering role with full delivery ownership.

Requirements

  • Bachelor's degree in computer science or a related discipline, or equivalent work experience required; an advanced degree is beneficial.
  • 15+ years of diverse experience in multiple areas of information technology required; experience in the securities or financial services industry is a plus.
  • Experience mentoring junior data engineers within engagements and contributing to team delivery quality, pipeline standards, and knowledge sharing.
  • Deep experience designing and operating production ELT/ETL pipelines, data warehouse/lakehouse architectures, and cloud data infrastructure.
  • Hands-on experience with modern data tooling: dbt, Airflow or Prefect, Spark, Snowflake or Databricks or BigQuery, and cloud-native data services (AWS, Azure, or GCP).
  • Experience working across the full data stack — ingestion, transformation, serving, governance, and quality — rather than only within a single layer.
  • Experience delivering data infrastructure that feeds AI/ML systems, including feature engineering pipelines, vector stores, RAG knowledge pipelines, or LLM context preparation workflows.
  • Experience operating in regulated environments (financial services, healthcare) with data governance, lineage, and compliance requirements.
  • Strong data modeling judgment: dimensional modeling, data vault, and One Big Table (OBT) patterns — knowing when to apply which and why.
  • Comfort operating in ambiguity and driving data discovery with senior stakeholders and data owners.
  • Experience with metadata management and governance platforms (Collibra, DataHub, OpenMetadata).
  • Familiarity with real-time and streaming data patterns (Kafka, Kinesis, Flink) as a complement to batch workloads.
  • Experience balancing pipeline velocity with data quality, observability, and SLA commitments.
  • Strong Java/Python engineering skills for pipeline development; SQL fluency (T-SQL, PL/SQL, or equivalent) for transformation and analysis.
  • Experience with dbt for transformation layer development and testing.
  • Proficiency with orchestration tooling: Airflow, Prefect, or equivalent (a minimal sketch follows this list).
  • Cloud data platform experience: Snowflake, Databricks, BigQuery, or Redshift in production.
  • Familiarity with cloud infrastructure relevant to data workloads: AWS (Glue, Lambda, Step Functions, S3, Redshift), Azure (Data Factory, Synapse, ADLS), or GCP (Dataflow, BigQuery, Cloud Composer).
  • Data quality and observability tooling: Great Expectations, Monte Carlo, dbt tests, or equivalent.
  • Version control, CI/CD, and DevOps practices applied to data pipeline development (DataOps).
  • Strong written and verbal communication across technical and non-technical audiences, including data owners, analytics consumers, and platform stakeholders.
  • Clear judgment on data product scope and delivery tradeoffs within a scoped engagement.
  • Ability to coordinate and execute across stakeholders — data owners, platform engineers, analytics teams — without formal authority.
  • Practical tradeoff thinking: pipeline complexity vs. maintainability, freshness vs. cost, schema flexibility vs. governance.
  • Bias toward action with disciplined follow-through on data quality and operational readiness.
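
To give candidates a concrete sense of the hands-on depth expected, the following is a minimal orchestration sketch in the spirit of the tooling bullets above: an Airflow DAG (assuming Airflow 2.4+ for the schedule parameter) chaining ingestion, a dbt build, and a data-quality gate. Every name in it (extract_trades, the dbt project path, the row-count check) is an illustrative placeholder, not a BNY system.

    # Minimal Airflow DAG sketch: ingest -> dbt transform -> quality gate.
    # All names here (extract_trades, the dbt project path, the quality
    # check) are illustrative placeholders only.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator


    def extract_trades() -> None:
        # Placeholder ingestion step, e.g. landing raw files into a
        # staging table before transformation.
        ...


    def check_row_count() -> None:
        # Placeholder quality gate: raise here to fail the run (and alert)
        # if the transformed table is empty or stale.
        ...


    with DAG(
        dag_id="trades_daily",           # hypothetical pipeline name
        start_date=datetime(2026, 1, 1),
        schedule="@daily",               # requires Airflow 2.4+
        catchup=False,
    ):
        ingest = PythonOperator(task_id="ingest", python_callable=extract_trades)
        transform = BashOperator(
            task_id="dbt_build",
            bash_command="dbt build --select trades --project-dir /opt/dbt",
        )
        quality_gate = PythonOperator(
            task_id="quality_gate", python_callable=check_row_count
        )

        ingest >> transform >> quality_gate

In practice the quality gate would usually delegate to dbt tests or a Great Expectations suite rather than a hand-rolled check.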

Responsibilities

  • Design, build, and harden production data pipelines, ELT/ETL workflows, and data platform components across client engagements — moving confidently from prototype to scalable, observable production deployment.
  • Embed with business and platform stakeholders to scope and execute time-boxed data engineering engagements with clear entry and exit criteria; translate defined data opportunities into production-ready delivery plans.
  • Architect and implement data infrastructure across ingestion, transformation, serving, and governance layers using modern tooling (dbt, Airflow/Prefect, Spark, Snowflake, Databricks, cloud-native services).
  • Build and integrate data pipelines that feed AI and analytics systems — including feature stores, RAG knowledge bases, semantic search indexes, and LLM context pipelines (illustrated in the first sketch after this list).
  • Default to reuse-first delivery: extend existing data platform patterns, templates, and pipeline modules rather than building avoidable one-offs; contribute reusable data assets back to shared repositories.
  • Apply data quality, observability, and operational readiness practices consistently — including lineage tracking, schema validation, SLA monitoring, and alerting (see the second sketch after this list).
  • Execute discovery with data owners, analytics teams, and sponsors to clarify data contracts, validate feasibility, and rapidly prototype before hardening into production.
  • Prepare clear handoff packages and transition plans — including data dictionaries, lineage documentation, pipeline runbooks, and ownership transfer artifacts — so receiving teams can sustain solutions independently.
  • Surface reusable data patterns and learnings from engagements that can be standardized and promoted into shared platform capabilities.
  • Coordinate with architecture, security, compliance, and governance stakeholders to ensure data solutions are production-appropriate, lineage-traceable, and governance-compliant.
  • Mentor junior data engineers; contribute to team delivery quality, standards, and knowledge sharing.
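
Illustrating the AI-feeding pipelines above (first sketch), here is a minimal RAG knowledge-pipeline ingestion step. The chunk size, overlap, and the dict standing in for a vector store are illustrative assumptions; a real pipeline would embed each chunk with the engagement's approved model before upserting.

    # Minimal RAG knowledge-pipeline sketch: chunk documents with overlap
    # and stage (chunk_id, text) rows for embedding and vector-store upsert.
    # Chunk sizes and the in-memory "store" are illustrative assumptions.
    from typing import Iterator


    def chunk(text: str, size: int = 800, overlap: int = 100) -> Iterator[str]:
        # Overlapping fixed-size chunks, so retrieval keeps context that
        # straddles a chunk boundary.
        step = size - overlap
        for start in range(0, len(text), step):
            yield text[start : start + size]
            if start + size >= len(text):
                break


    def ingest(doc_id: str, text: str, store: dict[str, str]) -> None:
        # Stand-in for a vector-store upsert, keyed by doc and chunk index.
        # A real pipeline would embed each chunk before writing.
        for i, piece in enumerate(chunk(text)):
            store[f"{doc_id}:{i}"] = piece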
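
Illustrating the quality and observability practices above (second sketch), a dependency-light schema and freshness check, assuming a pandas DataFrame, hypothetical column names, and a 24-hour SLA; production engagements would more often express these as dbt tests or a Great Expectations suite.

    # Minimal schema + freshness check sketch. Column names and the
    # 24-hour freshness SLA below are illustrative assumptions only.
    from datetime import datetime, timedelta, timezone

    import pandas as pd

    EXPECTED_SCHEMA = {
        "trade_id": "int64",
        "notional": "float64",
        "booked_at": "datetime64[ns, UTC]",
    }
    FRESHNESS_SLA = timedelta(hours=24)


    def validate(df: pd.DataFrame) -> None:
        # Schema contract: every expected column present with the agreed dtype.
        for col, dtype in EXPECTED_SCHEMA.items():
            if col not in df.columns:
                raise ValueError(f"missing column: {col}")
            if str(df[col].dtype) != dtype:
                raise ValueError(f"{col}: expected {dtype}, got {df[col].dtype}")

        # Freshness SLA: the newest record must fall within the agreed window.
        lag = datetime.now(timezone.utc) - df["booked_at"].max()
        if lag > FRESHNESS_SLA:
            raise ValueError(f"freshness SLA breached: data is {lag} old")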

Benefits

  • Highly competitive compensation, benefits, and wellbeing programs
  • Generous paid leaves, including paid volunteer time