Kestra · Posted 9 days ago
Full-time • Mid Level
Austin, TX
1,001-5,000 employees

Kestra Holdings offers industry-leading wealth management platforms for independent financial professionals nationwide. Kestra is dedicated to empowering independent financial professionals, including traditional and hybrid RIAs, to grow their businesses and deliver exceptional client service. We combine advanced business management technology with personalized consulting to provide unmatched scale, efficiency, and support. Our advisor-focused culture is built on innovation and advocacy, enabling advisors to offer comprehensive securities and investment advisory solutions to their clients. Lead with Purpose. Partner with Impact.

We are seeking a seasoned Databricks Data Engineer with expertise in Azure cloud services and the Databricks Lakehouse platform. The role involves designing and optimizing large-scale data pipelines, modernizing cloud-based data ecosystems, and enabling secure, governed data solutions. Strong skills in SQL, Python, PySpark, and ETL/ELT frameworks, along with experience with Delta Lake, Unity Catalog, and CI/CD automation, are essential.

Responsibilities
  • Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform, ensuring reliability, scalability, and governance.
  • Modernize the Azure-based data ecosystem, contributing to cloud architecture, distributed data engineering, data modeling, security, and CI/CD automation.
  • Use Apache Airflow or similar tools for orchestration and workflow automation.
  • Work with financial or regulated datasets, applying strong compliance and governance practices.
  • Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks notebooks.
  • Design and optimize Delta Lake data models for reliability, performance, and scalability.
  • Implement and manage Unity Catalog for RBAC, lineage, governance, and secure data sharing.
  • Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables.
  • Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems.
  • Automate API ingestion and workflows using Python and REST APIs.
  • Support data governance, lineage, cataloging, and metadata initiatives.
  • Enable downstream consumption for BI, data science, and application workloads.
  • Write optimized SQL/T-SQL queries and stored procedures, and build curated datasets for reporting.
  • Automate deployments, DevOps workflows, testing pipelines, and workspace configuration.

Qualifications
  • 8+ years of experience designing and developing scalable data pipelines in modern data warehousing environments, with full ownership of end-to-end delivery.
  • Expertise in data engineering and data warehousing, consistently delivering enterprise-grade solutions.
  • Proven ability to lead and coordinate data initiatives across cross-functional and matrixed organizations.
  • Advanced proficiency in SQL, Python, and ETL/ELT frameworks, including performance tuning and optimization.
  • Hands-on experience with Azure, Snowflake, and Databricks, including integration with enterprise systems.

Benefits
  • Competitive pay and benefits with a large employer (over 1,600 employees nationwide)
  • 401(k), health insurance, and a competitive benefits package
  • Work in a supportive, collaborative environment committed to professional excellence
  • Help clients navigate meaningful financial decisions with confidence
  • Opportunities for training, development, and long-term growth within the firm
  • Tuition reimbursement for qualified expenses