AssetMark · Posted about 1 month ago
$95,000 - $107,000/Yr
Full-time • Entry Level
Hybrid • Charlotte, NC
1,001-5,000 employees
Securities, Commodity Contracts, and Other Financial Investments and Related Activities

AssetMark is a leading strategic provider of innovative investment and consulting solutions serving independent financial advisors. We provide investment, relationship, and practice management solutions that advisors use to help clients achieve wealth, independence, and purpose.

The Associate Data Engineer will play a critical role in building and optimizing the data infrastructure that powers AssetMark's investment platform and advisor technology. This role is ideal for a highly motivated, analytically minded individual with foundational programming skills who is eager to launch a career in financial services data engineering. The focus will be on transforming raw data into high-quality, actionable insights while adhering to best practices for scale, reliability, and security.

We can consider candidates for this position who are able to accommodate a hybrid work schedule and are located near our Charlotte, NC office.

Responsibilities:

  • Data Pipeline Development & Maintenance: Assist in the design, development, testing, and maintenance of scalable ETL/ELT data pipelines using Python and cloud-native services (Azure preferred) to integrate data from various internal and external systems (e.g., custodial feeds, Salesforce, APIs). A simplified sketch of this kind of pipeline follows this list.
  • Data Quality & Governance: Implement and optimize data validation and cleansing routines within the pipelines to ensure data accuracy and integrity, directly supporting our commitment to fiduciary standards in wealth management.
  • Data Modeling & Warehousing: Collaborate with senior engineers and data architects on data modeling projects, helping to design and maintain schemas for our modern data platform (e.g., Snowflake, Databricks, or Cosmos DB).
  • Code & Automation: Write clean, efficient, and well-documented code primarily in Python for scripting, automation, and data transformation logic.
  • Troubleshooting & Support: Participate in the monitoring and troubleshooting of data pipeline issues, working to identify root causes and implement timely resolutions.
  • Documentation: Maintain accurate technical documentation for data flows, schemas, and processing logic to ensure team knowledge transfer and governance.
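To make the pipeline and data-quality responsibilities above concrete, here is a minimal, self-contained Python sketch of an extract-validate-load flow. It is illustrative only: the feed file, column names, and local SQLite target are hypothetical stand-ins, not AssetMark's actual systems.

    # Minimal ETL sketch; file, column, and table names are hypothetical.
    import csv
    import sqlite3

    def extract(path):
        """Read raw position records from a hypothetical custodial CSV feed."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        """Validate and cleanse: drop rows missing an account id, cast quantity."""
        clean = []
        for row in rows:
            if not row.get("account_id"):
                continue  # basic data-quality gate
            row["quantity"] = float(row["quantity"])
            clean.append(row)
        return clean

    def load(rows, db_path="positions.db"):
        """Load cleansed rows into a local SQLite table (stand-in for a warehouse)."""
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS positions "
            "(account_id TEXT, symbol TEXT, quantity REAL)"
        )
        con.executemany(
            "INSERT INTO positions VALUES (:account_id, :symbol, :quantity)", rows
        )
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("custodial_feed.csv")))

In production, the validation step would typically quarantine and log bad records rather than silently drop them, and the load target would be a warehouse platform such as Snowflake or Databricks rather than SQLite.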
Qualifications:

  • Programming Proficiency: Demonstrated, hands-on experience with Python for data manipulation and scripting.
  • SQL Mastery: Strong foundation in SQL and deep understanding of relational database concepts (e.g., joins, schemas, indexing).
  • Data Fundamentals: Basic understanding of data modeling principles (star schema, normal forms, etc.) and core ETL/ELT processes (see the SQL sketch after this list).
  • Communication: Excellent written and oral communication skills, with the ability to articulate technical issues clearly.
  • 0 to 3 years of professional or relevant internship/project experience in a data-focused role (e.g., Data Engineering, Software Engineering, or Data Analytics).
  • Bachelor's degree in Computer Science, Engineering, Data Science, Information Systems, or a closely related quantitative field.
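As a concrete illustration of the SQL and data-modeling fundamentals listed above, the following sketch builds a toy star schema (one fact table, one dimension table) in in-memory SQLite and runs a join-and-aggregate query. All table and column names are invented for the example.

    # Toy star schema: fact_trades joined to dim_security in in-memory SQLite.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dim_security (security_id INTEGER PRIMARY KEY, symbol TEXT);
        CREATE TABLE fact_trades  (trade_id INTEGER PRIMARY KEY,
                                   security_id INTEGER REFERENCES dim_security,
                                   quantity REAL);
        INSERT INTO dim_security VALUES (1, 'AAPL'), (2, 'MSFT');
        INSERT INTO fact_trades  VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 25.0);
    """)
    # Join the fact table to its dimension and aggregate: a core warehouse pattern.
    for symbol, total in con.execute("""
        SELECT s.symbol, SUM(t.quantity)
        FROM fact_trades t
        JOIN dim_security s ON s.security_id = t.security_id
        GROUP BY s.symbol
    """):
        print(symbol, total)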
Preferred Qualifications:

  • Exposure to cloud platforms (Azure preferred; AWS or GCP also valued) and cloud-native data services.
  • Familiarity with distributed data processing frameworks like Apache Spark or Databricks.
  • Experience with workflow orchestration tools (e.g., Apache Airflow, Azure Data Factory).
  • Knowledge of financial services, brokerage, or wealth management data domains (e.g., understanding of positions, securities master, and custodial data).
  • Experience consuming and building RESTful APIs to move data (see the sketch below).
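Below is a hedged sketch of the REST-API data movement mentioned in the last item: fetch a JSON payload over HTTP and land it as newline-delimited JSON for downstream loading. The endpoint URL and response shape are placeholders, not a real service.

    # Consume a RESTful API and stage the payload; URL and schema are placeholders.
    import json
    import urllib.request

    API_URL = "https://api.example.com/v1/accounts"  # hypothetical endpoint

    def fetch_accounts(url=API_URL):
        """GET a JSON payload and return it as Python objects."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    def to_ndjson(records, path="accounts.ndjson"):
        """Land the payload as newline-delimited JSON, a common staging format."""
        with open(path, "w") as f:
            for rec in records:
                f.write(json.dumps(rec) + "\n")

    if __name__ == "__main__":
        to_ndjson(fetch_accounts())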
Benefits:

  • Flex Time Off or Paid Time/Sick Time Off
  • 401K - 6% Employer Match
  • Medical, Dental, Vision - HDHP or PPO
  • HSA - Employer contribution (HDHP only)
  • Volunteer Time Off
  • Career Development / Recognition
  • Fitness Reimbursement
  • Hybrid Work Schedule