Senior MDM Data Engineer

Cetera | Dallas, TX
$111,000 - $148,000 | Hybrid

About The Position

We are at the forefront of transforming the future of technology in the financial industry, and we seek curious, practical individuals to help us pave the way. Our team is not intimidated by taking calculated risks; we relish a good challenge and are eager to engage in problem-solving. As a member of our team, you will work alongside like-minded experts in a culture deeply rooted in innovation and progress. Join us to be part of a transformative journey that can shape the industry's future.

The Senior MDM Data Engineer is a hands-on technical expert responsible for designing, building, and operating the data engineering pipelines that power the Master Data Management (MDM) ecosystem across the enterprise. This role focuses on integrating Profisee with upstream source systems and downstream consuming systems, transforming source data into MDM-ready structures, and enabling near real-time propagation of golden-record updates across the enterprise. The engineer will work across Profisee, Snowflake, AWS (Lambda, S3, Step Functions, MSK/Kinesis), and the broader PCB ingestion → mastering → integrated → access pipeline.

This is a key contributor role in establishing enterprise-wide reference data, mastering domains (Party, Account, Advisor/Rep, Security, etc.), reinforcing governance rules, and building scalable engineering patterns for survivorship, deduplication, and attribute propagation. This opportunity provides a hybrid work schedule, ideally from our Dallas, Texas office.

Requirements

  • 5–10+ years in data engineering, including 3+ years supporting MDM platforms (preferably Profisee).
  • Bachelor’s degree in Information Systems, Data Management, Computer Science, Business Analytics, or an equivalent field.
  • 4–7 years in MDM, Data Quality, or Data Integration roles.
  • Hands-on experience with Profisee MDM required.
  • Experience delivering MDM solutions in AWS cloud environments is strongly preferred.
  • Strong programming experience in Python, SQL, and data transformation frameworks (dbt, Spark, Glue).
  • Experience working with AWS data services including S3, Glue, Lambda, Step Functions, DynamoDB, and API Gateway.
  • Hands-on experience with event-driven mastering using Kafka/MSK (preferred), Kinesis, or EventBridge (see the sketch after this list).
  • Familiarity with AWS security models, IAM, encryption, and operational best practices, ideally in a financial services context.
  • Proficiency designing event-driven integrations, especially for operational data mastering.
  • Advanced Snowflake experience (Streams/Tasks, Iceberg tables, RBAC).
  • Strong understanding of survivorship, matching algorithms, identity resolution, record lineage, and MDM best practices.
  • Experience building CI/CD pipelines (GitHub Actions, CodePipeline, Bitbucket, etc.).
  • Familiarity with enterprise MDM domains: Party, Account, Security, Advisor, Transaction context, Hierarchy/Relationships.
  • Proven experience defining DQ rules, profiling logic, and exception workflows.
  • Understanding of metadata management and stewardship processes.
  • Experience designing and managing support and monitoring frameworks for 24x7 operational data systems.
  • Proven ability to establish incident management processes and escalation protocols.
  • Strong background in optimizing ODS performance, availability, and reliability at scale.
  • Strong communication skills and ability to work across business and IT stakeholders.
  • Comfortable analyzing complex source data and resolving data quality/root cause issues.
  • Ability to work in a fast-paced, federated data organization and navigate multi-domain alignment.
  • Demonstrated ownership and a proactive, engineering-first mindset.
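
For context, the event-driven mastering pattern referenced above might look something like the sketch below: an AWS Lambda handler consuming golden-record change events from an MSK topic via an event source mapping. This is a minimal illustration only; the payload fields and the downstream handling are assumptions, not a description of this team's actual design.

    import base64
    import json

    def lambda_handler(event, context):
        """Apply golden-record attribute changes delivered via an MSK
        event source mapping. The payload schema here is assumed."""
        updates = []
        # Lambda's MSK event groups records under "topic-partition" keys.
        for records in event.get("records", {}).values():
            for record in records:
                # Kafka record values arrive base64-encoded.
                payload = json.loads(base64.b64decode(record["value"]))
                updates.append(
                    {
                        "master_id": payload["master_id"],    # assumed field
                        "attribute": payload["attribute"],    # assumed field
                        "new_value": payload["new_value"],    # assumed field
                    }
                )
        # A real pipeline would upsert these updates into Snowflake and/or
        # republish them to downstream domain topics.
        print(f"processed {len(updates)} golden-record updates")
        return {"processed": len(updates)}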

Nice To Haves

  • Cloud certifications (AWS Solutions Architect, AWS Data Analytics) preferred.
  • MDM certifications are desirable.
  • Financial services certifications such as the SIE, Series 99, or similar are preferred.
  • Technical certifications related to the AWS technology stack are preferred.
  • Background in financial services, broker-dealer, or other regulated, data-intensive industries is preferred.
  • Experience implementing streaming/event-driven data patterns is highly desirable.
  • Experience integrating Profisee with Snowflake and AWS-native pipelines.
  • Knowledge of streaming-based MDM updates and near-real-time mastering patterns.
  • Exposure to canonical modeling in event-driven architectures (a sketch follows this list).
  • Familiarity with reconciling mastered data with downstream operational systems.
  • Understanding of domain-driven design for large-scale data ecosystems.
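
To make the canonical-modeling and streaming-update items above concrete, the sketch below shows one way a canonical golden-record change event could be modeled as a Python dataclass. Every field name is an illustrative assumption, not a published schema.

    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone
    import json
    import uuid

    @dataclass(frozen=True)
    class GoldenRecordChange:
        """One mastered-attribute change, keyed by domain and master ID."""
        domain: str         # e.g. "Party", "Account", "Advisor"
        master_id: str      # survivorship-winning identifier
        attribute: str      # canonical attribute that changed
        new_value: str
        source_system: str  # system whose value survived
        event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        occurred_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    if __name__ == "__main__":
        change = GoldenRecordChange(
            domain="Party",
            master_id="P001",
            attribute="primary_email",
            new_value="a@example.com",
            source_system="CRM",
        )
        # Serialized form a producer might publish to a Kafka topic.
        print(json.dumps(asdict(change), indent=2))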

Responsibilities

  • MDM Data Engineering & Pipeline Development: Design, build, and maintain ingestion, standardization, match/merge preparation, and golden-record distribution pipelines that support Profisee and Snowflake MDM workflows.
  • Develop and optimize batch, micro-batch, and real-time/streaming pipelines using AWS services (MSK/Kafka, Lambda, Step Functions, S3, Glue) that support near real-time golden-record updates.
  • Build robust pipelines to feed the feedback loop from Profisee golden records to downstream systems (e.g., operational apps, Snowflake EDW, integration layer, domain APIs).
  • Transform source-system attributes into canonical, MDM domain-aligned schemas using Iceberg or engineered transformations.
  • Profisee Integration & Master Data Operations: Implement and maintain Profisee ingestion patterns, including entity configuration, match/merge logic, survivorship rule execution, and data validation pipelines.
  • Build automated flows to promote golden records from Profisee into Snowflake and publish attribute changes to downstream platforms.
  • Support MDM model changes by developing scalable pipelines that adapt to schema evolution and new mastered attributes.
  • Partner with MDM Analysts and Data Stewards to translate governance rules into executable engineering logic.
  • Architecture, Standards & Data Quality: Implement data quality rule enforcement (completeness, conformity, integrity, consistency) as part of MDM pipelines using tools such as Great Expectations/Soda/Deequ or native SQL rules (a minimal sketch follows this list).
  • Contribute to MDM metadata management, lineage capture, and integration with enterprise catalogs.
  • Apply best practices for referential integrity, survivorship, deduplication, hierarchy management, and domain alignment.
  • Establish reusable patterns for onboarding new MDM sources or domains.
  • Collaboration & Delivery: Partner with ingestion, integrated, and access layer engineering teams to ensure MDM-produced golden data is correctly consumed and traceable end-to-end.
  • Work with Security, IAM, and DevOps to ensure secure, scalable delivery of MDM workloads.
  • Provide technical guidance to junior engineers and serve as a subject-matter SME for MDM pipeline development.
  • Contribute to Agile ceremonies, sprint planning, backlog refinement, and peer design reviews.
  • Operational Support & Continuous Improvement: Ensure MDM jobs, both batch and streaming, meet SLAs.
  • Support MDM incident triage, match conflicts, rule tuning, and schema evolution.
  • Contribute to playbooks, runbooks, onboarding guides, and stewardship documentation.
  • Evaluate Profisee feature releases, AWS enhancements, and architectural maturity opportunities.
  • Operational Excellence: Monitor and continuously improve data quality and operational efficiency.
  • Monitor and help optimize the performance of the MDM platform and related systems to meet evolving business demands.
  • Stay current with emerging technologies and industry trends to drive innovation and best practices.
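
As a concrete illustration of the data quality rule enforcement described above, here is a minimal, framework-free sketch of completeness and conformity checks in Python. The column names, patterns, and thresholds are assumptions for illustration; in practice these rules would live in Great Expectations, Soda, Deequ, or native SQL.

    import re

    # Hypothetical party records staged for mastering.
    ROWS = [
        {"party_id": "P001", "tax_id": "123-45-6789", "email": "a@example.com"},
        {"party_id": "P002", "tax_id": None, "email": "b@example.com"},
        {"party_id": "P003", "tax_id": "987654321", "email": "not-an-email"},
    ]

    def completeness(rows, column):
        """Fraction of rows where the column is populated."""
        filled = sum(1 for r in rows if r.get(column) not in (None, ""))
        return filled / len(rows)

    def conformity(rows, column, pattern):
        """Fraction of populated values matching the expected pattern."""
        values = [r[column] for r in rows if r.get(column)]
        if not values:
            return 0.0
        return sum(1 for v in values if re.fullmatch(pattern, v)) / len(values)

    if __name__ == "__main__":
        # (rule name, observed score, minimum acceptable score) -- thresholds assumed.
        rules = [
            ("tax_id completeness", completeness(ROWS, "tax_id"), 0.95),
            ("tax_id conformity",
             conformity(ROWS, "tax_id", r"\d{3}-\d{2}-\d{4}"), 0.99),
            ("email conformity",
             conformity(ROWS, "email", r"[^@\s]+@[^@\s]+\.[^@\s]+"), 0.99),
        ]
        for name, score, threshold in rules:
            status = "PASS" if score >= threshold else "FAIL -> exception workflow"
            print(f"{name}: {score:.2%} ({status})")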