Sr. Data Engineer

Allworth Financial, Dallas, TX
Hybrid

About The Position

Allworth Financial is an independent financial advisory firm specializing in retirement planning, investment advising, and 401(k) management. Founded in 1993, it is a high-growth, private-equity-backed, multi-branch Registered Investment Advisor. Allworth is primarily a fee-based, employee-centric fiduciary advisory firm focused on client well-being and education. The multi-billion-dollar business continues to grow, and was recognized with the "Circle of Excellence" award in 2021 and as a Barron's Top 40 RIA in 2024.

This role is an opportunity to join the Analytics & Insights team within a high-growth business. The position will optimize the company's existing data infrastructure to support robust data analytics and reporting. The company is seeking a Senior Data Engineer, or a similarly experienced data modernization professional, to guide the next stage of its data platform evolution. The role will support the modernization of legacy SQL-based processes into scalable, maintainable cloud data pipelines; identify architectural gaps; improve engineering standards; and mature data governance practices.

The ideal candidate will have broad hands-on experience across data engineering, cloud data platforms, ETL/ELT design, data modeling, orchestration, performance tuning, data quality, and governance. This individual will both build and advise: write production-quality pipelines, review existing architecture, mentor team members, and identify risks, patterns, and opportunities. This person will play a key role in moving from legacy stored procedures and fragmented reporting processes toward a more reliable, transparent, and well-governed modern data architecture.

This is a full-time, Exempt position with a hybrid work schedule in the Addison office.

Requirements

  • 5+ years of professional experience in data engineering, analytics engineering, data architecture, or a closely related role.
  • Strong SQL skills, including stored procedures, joins, window functions, CTEs, merge logic, and performance tuning.
  • Hands-on experience with PySpark or distributed data processing frameworks.
  • Experience designing and maintaining ETL/ELT pipelines in a cloud environment.
  • Experience with data lake or lakehouse architecture, including Delta Lake or similar table formats.
  • Strong understanding of data modeling concepts, including dimensional modeling, staging layers, curated layers, and semantic/reporting models.
  • Experience with pipeline orchestration, scheduling, dependency management, and failure recovery.
  • Ability to troubleshoot data quality issues, pipeline failures, schema drift, performance bottlenecks, and source system inconsistencies.
  • Familiarity with modern data governance concepts, including cataloging, lineage, ownership, access control, data definitions, and data quality monitoring.
  • Ability to communicate clearly with both technical and non-technical stakeholders.
  • Experience reviewing existing systems and recommending practical, incremental improvements.
  • Experience with Azure Synapse Analytics, Azure Data Lake Storage, Microsoft Fabric, Azure Data Factory, or related Azure data services.

Responsibilities

  • Design, build, and optimize data pipelines using PySpark, Delta Lake, SQL, and cloud-based data engineering tools.
  • Improve data pipeline reliability, observability, logging, error handling, and restartability.
  • Review existing notebooks, SQL scripts, data models, and orchestration workflows for maintainability and performance.
  • Guide best practices for Azure Synapse, Spark, Delta Lake, data lake storage, and related cloud data services.
  • Identify architectural gaps, technical debt, and modernization risks.
  • Help design and implement data quality checks, reconciliation processes, and validation frameworks.
  • Support development of canonical IDs, master data patterns, and entity resolution processes.
  • Assist with data cataloging, lineage, metadata management, and governance practices.
  • Partner with business stakeholders to understand reporting, analytics, and data product requirements.
  • Help structure scalable data models for BI tools such as ThoughtSpot, Power BI, or similar platforms.
  • Mentor data team members and help raise the overall engineering maturity of the team.
  • Provide guidance on areas the team may be overlooking, such as security, performance, cost, governance, orchestration, testing, and long-term maintainability.

Benefits

  • Medical: Blue Shield (PPOs and HDHP with HSA) plans and Kaiser (HMO) plans for California associates
  • Dental insurance with MetLife
  • Vision insurance with VSP
  • Optional supplemental benefits
  • Healthcare savings accounts with company contribution
  • Flexible spending accounts
  • Flexible working arrangements
  • Generous 401(k) contributions
  • Flexible paid time off policy for Exempt associates
  • 15 days of paid time off annually for Non-Exempt associates during the first three years of employment
  • 11 Paid Holidays
  • Option to participate in our Equity Purchase Program (Contract, intern, and part-time employees are not eligible)
  • Future growth opportunities within the company