Lead with Purpose. Partner with Impact.

At Kestra Holdings, we don’t just support financial advisors; we help them thrive. Our service model is human-led, tech-enabled, and purpose-driven, empowering advisors to deliver exceptional outcomes for their clients. We are seeking a highly skilled Databricks Data Engineer with deep expertise in data engineering, Azure cloud services, and Databricks Lakehouse technologies.

What You’ll Do
- Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform, ensuring reliability, scalability, and governance
- Modernize the Azure-based data ecosystem, contributing to cloud architecture, distributed data engineering, data modeling, security, and CI/CD automation
- Use Apache Airflow and similar tools for orchestration and workflow automation
- Work with financial or regulated datasets, applying strong compliance and governance practices
- Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks notebooks
- Design and optimize Delta Lake data models for reliability, performance, and scalability
- Implement and manage Unity Catalog for RBAC, lineage, governance, and secure data sharing
- Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
- Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
- Automate API ingestion and workflows using Python and REST APIs
- Support data governance, lineage, cataloging, and metadata initiatives
- Enable downstream consumption for BI, data science, and application workloads
- Write optimized SQL/T-SQL queries, stored procedures, and curated datasets for reporting
- Automate deployments, DevOps workflows, testing pipelines, and workspace configuration

What You Bring
- 8+ years of experience designing and developing scalable data pipelines in modern data warehousing environments, with full ownership of end-to-end delivery
- Expertise in data engineering and data warehousing, consistently delivering enterprise-grade solutions
- Proven ability to lead and coordinate data initiatives across cross-functional and matrixed organizations
- Advanced proficiency in SQL, Python, and ETL/ELT frameworks, including performance tuning and optimization
- Hands-on experience with Azure, Snowflake, and Databricks, including integration with enterprise systems

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily.
Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 1,001-5,000