Data Engineer

AssetMark Financial Holdings · Charlotte, NC
$126,000 - $140,000 · Hybrid

About The Position

The Data Engineer will be a key technical leader responsible for the architecture, development, and optimization of mission-critical data pipelines that power AssetMark's platform. This role requires an experienced individual who can ensure data reliability and scalability across our Azure and Snowflake data stack, while proactively integrating Generative AI (GenAI) techniques into our engineering workflows and data products. The ideal candidate will transform raw data into governed, high-quality assets, mentor junior team members, and drive the adoption of modern data observability practices.

Requirements

  • Cloud Expertise: Proven hands-on experience building and deploying data solutions on Microsoft Azure.
  • Data Warehouse: Deep expertise with Snowflake architecture, optimization, and advanced SQL features.
  • Programming: Strong proficiency in Python for data manipulation, scripting, and pipeline automation.
  • Modeling: Solid experience with data modeling techniques (Dimensional, 3NF, or Data Vault) and developing complex ETL/ELT workflows.
  • Data Ops: Experience with modern data transformation tools like dbt (Data Build Tool) and orchestration tools (e.g., Azure Data Factory, Airflow).
  • Experience: 3-7 years of professional experience in Data Engineering, Software Engineering, or a similar role.
  • Candidates must be legally authorized to work in the US to be considered.
  • We are unable to provide visa sponsorship for this position.

Nice To Haves

  • Experience working in the financial services or wealth management domain.
  • Prior exposure to Data Observability platforms (e.g., Monte Carlo, Collibra).
  • Familiarity with Generative AI (GenAI) concepts or hands-on use of LLM coding assistants (e.g., Copilot) to improve engineering efficiency.
  • Experience with real-time or streaming data technologies (e.g., Kafka, Azure Event Hubs).
  • Proficiency with Infrastructure as Code (IaC) tools like Terraform.

Responsibilities

Data Architecture & Engineering Leadership (70%)

  • Platform Development: Design, build, and optimize highly scalable, fault-tolerant ELT/ETL pipelines using Python, SQL, and dbt to integrate complex financial datasets from diverse sources into Snowflake (hosted on Azure).
  • Cloud & Infrastructure: Own the data infrastructure on Azure, leveraging services such as Azure Data Factory and Azure Synapse, and set up and manage data flows into Snowflake.
  • Data Modeling: Lead the design and implementation of dimensional (Kimball), Inmon, and/or Data Vault models within the data warehouse to support advanced analytics and reporting.
  • Performance Optimization: Tune complex SQL queries and data pipelines within Snowflake to ensure low latency and cost-efficient compute usage.
  • Code Quality & CI/CD: Champion software engineering best practices, including robust unit/integration testing, automated data validation, and resilient CI/CD pipelines (e.g., Azure DevOps or GitHub Actions).

Data Quality, Observability, and Governance (20%)

  • Data Reliability: Implement advanced data quality frameworks and observability solutions (e.g., Monte Carlo) to automatically monitor data freshness, volume, distribution, and schema health, proactively preventing data downtime.
  • Data Lineage: Establish and maintain comprehensive data lineage documentation and tooling to provide transparency and ensure compliance across the data transformation layer.
  • Security & Compliance: Ensure all data assets and pipelines adhere to strict financial industry compliance, governance, and security standards (RBAC, encryption, PII masking).

Innovation & Mentorship (10%)

  • GenAI Integration: Proactively evaluate and pilot Generative AI techniques (e.g., leveraging LLMs via tools like GitHub Copilot or open-source frameworks) to accelerate internal development processes, generate boilerplate code, and enhance documentation.
  • Mentorship: Act as a technical mentor for junior data engineers, guiding them on best practices in Python, Snowflake, data modeling, and cloud architecture.
  • Stakeholder Collaboration: Partner with Data Scientists, Product Managers, and Business Analysts to translate high-level business requirements into precise, scalable technical solutions.

Benefits

  • Flex Time or Paid Time Off and Sick Time Off
  • 401K – 6% Employer Match
  • Medical, Dental, Vision – HDHP or PPO
  • HSA – Employer contribution (HDHP only)
  • Volunteer Time Off
  • Career Development / Recognition
  • Fitness Reimbursement
  • Hybrid Work Schedule

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 1,001-5,000
