Data Engineer

AssetMark
Charlotte, NC
Hybrid

About The Position

AssetMark is a leading strategic provider of innovative investment and consulting solutions serving independent financial advisors. We provide investment, relationship, and practice management solutions that advisors use to help clients achieve wealth, independence, and purpose.

The Job/What You'll Do

The Data Engineer (Analytics Engineering) will be a key technical leader responsible for the architecture, development, and optimization of the mission-critical data pipelines that power AssetMark's platform. The role requires an experienced engineer who is comfortable representing the analytics engineering function independently with business stakeholders, can ensure data reliability and scalability across our Azure and Snowflake data stack, and proactively integrates Generative AI (GenAI) techniques into our engineering workflows and data products. The ideal candidate will transform raw data into governed, high-quality assets, mentor junior team members, and drive the adoption of modern data observability practices.

We can only consider candidates who can accommodate a hybrid work schedule and are located near our Charlotte, NC office.

Requirements

  • 3–7 years of professional experience in a Data Engineering, Analytics Engineering, or similar role, with demonstrable progression in scope and complexity.
  • Expertise with Snowflake architecture, optimization, and advanced SQL features.
  • Strong proficiency in SQL/dbt for data transformation; experience with Python is a plus.
  • Solid experience with data modeling techniques, including Dimensional Modeling and developing complex ETL/ELT workflows.
  • Thrives at the intersection of data engineering and analytics: learning business domains, earning the trust of business users and data stewards, and driving clarity from ambiguity to carry well-defined data solutions from requirement to delivery.
  • Proven hands-on experience building and deploying data solutions in a cloud environment (Azure preferred).
  • Experience with modern data transformation tools like dbt and orchestration tools (e.g., Azure Data Factory, Airflow).
  • Candidates must be legally authorized to work in the US to be considered.

Nice To Haves

  • Experience working in the financial services or wealth management domain.
  • Prior exposure to Data Observability platforms (e.g., Monte Carlo, Atlan).
  • Familiarity with Generative AI (GenAI) concepts or hands-on use of LLM coding assistants (e.g., Copilot) to improve engineering efficiency.
  • Experience with real-time or streaming data technologies (e.g., Kafka, Azure Event Hubs).
  • Exposure to Infrastructure as Code (IaC) tools like Terraform.

Responsibilities

  • Design, build, and optimize highly scalable and fault-tolerant ELT/ETL pipelines using dbt and SQL to transform complex datasets in Snowflake and deliver enterprise-scale data assets for analytics and end-user consumption (a dbt model sketch follows this list).
  • Own the data infrastructure on Azure, leveraging services such as Azure Data Factory and Azure Synapse, and set up and manage data flows into Snowflake.
  • Lead the design and implementation of dimensional models (Kimball/Inmon approaches) within the data warehouse to support advanced analytics and reporting.
  • Conduct performance tuning for complex SQL queries and data pipelines within Snowflake to ensure low latency and cost-efficient compute usage (a query-tuning sketch follows this list).
  • Champion software engineering best practices, including robust unit/integration testing, automated data validation, and maintaining resilient CI/CD pipelines (e.g., using Azure DevOps or GitHub Actions).
  • Define and enforce data quality standards across the transformation layer, including designing dbt and other automated tests that validate data accuracy, completeness, and consistency at each data architecture layer (a dbt test sketch follows this list).
  • Implement advanced data quality frameworks and observability solutions (e.g., Monte Carlo) to automatically monitor data freshness, volume, distribution, and schema health, proactively preventing data downtime.
  • Establish and maintain comprehensive data lineage documentation and tooling (e.g., Atlan) to provide transparency and ensure compliance across the data transformation layer.
  • Ensure all data assets and pipelines adhere to strict financial industry compliance, governance, and security standards, including RBAC, encryption, and PII masking (a masking-policy sketch follows this list).
  • Proactively evaluate and pilot Generative AI techniques (e.g., leveraging LLMs via tools like GitHub Copilot or open-source frameworks) to accelerate internal development processes, generate boilerplate code, and enhance documentation.
  • Act as a technical mentor for junior data engineers, guiding them on best practices in SQL, dbt, Snowflake, data modeling, and cloud architecture.
  • Serve as a trusted technical partner to business users and data stewards: proactively learning domain context, eliciting and clarifying business rules, and applying critical thinking to evaluate tradeoffs and translate consumption needs into well-designed, scalable data assets.
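
To illustrate the pipeline work described above, here is a minimal sketch of an incremental dbt model targeting Snowflake. The model, table, and column names (fct_account_positions, stg_positions, position_id, as_of_date) are hypothetical placeholders, not AssetMark schema.

    -- models/marts/fct_account_positions.sql (hypothetical model name)
    {{ config(
        materialized='incremental',   -- build once, then merge only new rows
        unique_key='position_id',     -- merge key for late-arriving updates
        cluster_by=['as_of_date']     -- Snowflake clustering to aid partition pruning
    ) }}

    select
        position_id,
        account_id,
        as_of_date,
        market_value
    from {{ ref('stg_positions') }}
    {% if is_incremental() %}
        -- on incremental runs, process only rows past the target's high-water mark
        where as_of_date > (select max(as_of_date) from {{ this }})
    {% endif %}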
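
For the performance-tuning responsibility, a common starting point is Snowflake's ACCOUNT_USAGE views: surface the slowest recent queries and compare partitions scanned against partitions total to judge pruning. This is standard Snowflake SQL, not an AssetMark-specific workflow.

    -- find the most expensive queries from the last 7 days
    select
        query_id,
        warehouse_name,
        total_elapsed_time / 1000 as elapsed_seconds,
        partitions_scanned,   -- scanned close to total suggests poor pruning
        partitions_total
    from snowflake.account_usage.query_history
    where start_time >= dateadd(day, -7, current_timestamp())
    order by total_elapsed_time desc
    limit 20;

    -- if pruning is poor on a large table, a clustering key may help
    alter table fct_account_positions cluster by (as_of_date);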
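
For the data quality responsibility, dbt supports schema tests in YAML as well as "singular" tests: plain SQL files under tests/ where any returned row marks a failure. A minimal sketch, reusing the hypothetical model above:

    -- tests/assert_market_value_non_negative.sql (hypothetical singular test)
    -- dbt fails this test if the query returns any rows
    select
        position_id,
        market_value
    from {{ ref('fct_account_positions') }}
    where market_value < 0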
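
For the PII-masking responsibility, Snowflake supports column-level dynamic data masking via masking policies. A minimal sketch; the role, table, and column names are hypothetical:

    -- hide a sensitive column from all but a privileged role
    create masking policy ssn_mask as (val string) returns string ->
        case
            when current_role() in ('COMPLIANCE_ADMIN') then val   -- hypothetical role
            else '***MASKED***'
        end;

    alter table customer_accounts   -- hypothetical table
        modify column ssn set masking policy ssn_mask;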

Benefits

  • Flex Time or Paid Time Off and Sick Time Off
  • 401K – 6% Employer Match
  • Medical, Dental, Vision – HDHP or PPO
  • HSA – Employer contribution (HDHP only)
  • Volunteer Time Off
  • Career Development / Recognition
  • Fitness Reimbursement
  • Hybrid Work Schedule
  • Competitive Benefits