About The Position

We are hiring a senior, hands-on Data Engineer to play a pivotal role in transforming Global Delivery operations through data-driven automation, generative AI, and agent-based systems. This is not a traditional data engineering role: it sits at the intersection of data engineering, AI enablement, and business simplification, owning the end-to-end data foundation that powers intelligent workflows used directly in core operational processes.

You will act as the data owner and steward for high-impact simplification initiatives, ensuring data quality, control, auditability, and fitness for purpose across analytics, automation, and AI decisioning. You will work closely with Product Owners, AI engineers, and platform teams to design and operate production-grade data pipelines and data products that enable scalable, secure, and observable AI-driven workflows across the asset servicing lifecycle.

Requirements

  • 5–10+ years of hands-on experience in data engineering, preferably in platform, infrastructure, or large-scale enterprise environments.
  • Strong engineering and systems mindset with experience building production-grade data pipelines.
  • Deep understanding of data lifecycle management, data quality, metadata, and controls in regulated environments.
  • Experience working closely with business stakeholders in complex operational domains (e.g., fund accounting, middle office, custody, payments, transfer agency).
  • Strong SQL skills for data validation and analysis.
  • Working knowledge of Python (or similar) for data processing, automation, or integration.
  • Solid understanding of: ETL/ELT patterns; APIs and file-based integrations (CSV, XML, vendor feeds); data warehouses, data lakes, and analytical data models; and workflow orchestration and scheduling tools.
  • Experience with data cataloging, data quality tools, and engineering documentation practices.
  • Experience supporting AI, ML, or generative AI systems through data engineering and governance.
  • Familiarity with concepts such as agent-based systems, human-in-the-loop workflows, and model/prompt grounding and decision traceability.
  • Ability to think critically about data risks, controls, and guardrails in AI-driven operational workflows.
  • Degree in Computer Science, Engineering, or equivalent practical experience in the financial services domain.
  • Passion for being hands-on, owning outcomes end-to-end, and building systems that materially change how work gets done.
  • Comfortable operating in a small, high-impact team with significant visibility and influence across the organization.

Responsibilities

  • Design, build, and operate scalable, resilient data pipelines supporting operational analytics, reporting, and AI-driven automation across cloud and on‑prem environments.
  • Model, store, and serve large-scale datasets optimized for both analytical workloads and low-latency consumption by AI and agent-based systems.
  • Integrate data from multiple internal and external sources, including vendor feeds, APIs, files, and enterprise platforms.
  • Ensure pipelines are observable, reliable, and production-ready with clear ownership and operational rigor.
  • Act as Data Steward for assigned business services within GD Simplification, accountable for: data quality, consistency, lineage, and lifecycle management; business definitions, critical data elements (CDEs), and calculation logic; and data dictionaries, business glossaries, and metadata.
  • Define and enforce data standards, controls, and documentation aligned with governance and platform requirements.
  • Translate business control requirements into data-level and AI control mechanisms.
  • Enable AI and intelligent automation by ensuring high-quality, well-governed inputs for training, inference, and decisioning.
  • Define agent action constraints, data quality gates, and human‑in‑the‑loop triggers before automated actions are executed.
  • Ensure auditability and traceability through agent decision logs, data lineage, and versioning of rules, prompts, and models.
  • Establish data quality rules and exception taxonomies.
  • Monitor data quality dashboards, triage issues, and coordinate remediation across upstream and downstream teams.
  • Ensure data quality and control checks are embedded before AI-driven actions occur.
  • Align data architecture and integrations with broader ecosystem dependencies, cost considerations, and execution plans.
  • Partner closely with Product Owners to ensure data definitions and metrics align with business intent and measurable outcomes.
  • Collaborate across engineering, AI, platform, and business teams to identify and prioritize high-value simplification and automation use cases.
  • Communicate complex technical concepts clearly to non-technical stakeholders and help drive adoption of AI-enabled solutions across GD.
  • Champion modern data and engineering practices across organizational boundaries.

Benefits

  • Employees are eligible to participate in State Street’s comprehensive benefits program, which includes: our retirement savings plan (401K) with company match; insurance coverage including basic life, medical, dental, vision, long-term disability, and other optional additional coverages; paid time off including vacation, sick leave, short-term disability, and family care responsibilities; access to our Employee Assistance Program; incentive compensation including eligibility for annual performance-based awards (excluding certain sales roles subject to sales incentive plans); and eligibility for certain tax-advantaged savings plans.
  • For a full overview, visit https://hrportal.ehr.com/statestreet/Home.