- Design and implement scalable ETL pipelines on Databricks (PySpark, SQL, Delta Lake) to process credit card transactions, balances, and payments.
- Develop the core calculation engines and integrate with upstream and downstream systems.
- Optimize Spark jobs for large-scale financial datasets: billions of records, partitioning, caching, and adaptive query execution (AQE).
- Ensure data quality and reconciliation across the raw, curated, and output layers.
- Implement parameterized rules (APR, compounding frequency, grace period logic).
- Collaborate with business analysts to translate product rules into technical implementations.
- Apply unit testing, CI/CD pipelines, and monitoring for production-grade pipelines.
- Ensure compliance with financial data governance, lineage, and audit requirements.
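As a hedged illustration of the parameterized-rules responsibility (APR, compounding frequency, grace period logic), here is a minimal Python sketch of the kind of rule logic such a calculation engine might encode. All names, parameters, and the accrual formula are hypothetical assumptions for illustration, not details from the role description; a real engine would apply such rules at scale via PySpark and compound per statement cycle.

```python
from dataclasses import dataclass
from decimal import Decimal, ROUND_HALF_UP

@dataclass(frozen=True)
class InterestRule:
    """Hypothetical parameterized interest rule (illustrative only)."""
    apr: Decimal                 # annual percentage rate, e.g. Decimal("0.2499")
    compounding_per_year: int    # e.g. 365 for daily compounding
    grace_period_days: int       # days after statement before interest accrues

def accrued_interest(rule: InterestRule,
                     balance: Decimal,
                     days_since_statement: int) -> Decimal:
    """Interest accrued on a balance; zero inside the grace period."""
    if days_since_statement <= rule.grace_period_days:
        return Decimal("0.00")
    periodic_rate = rule.apr / rule.compounding_per_year
    accrual_days = days_since_statement - rule.grace_period_days
    # Simple per-period accrual for illustration; production logic would
    # compound per cycle and handle payment allocation, trailing interest, etc.
    interest = balance * periodic_rate * accrual_days
    return interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Example: APR 36.5%, daily compounding, 25-day grace period.
rule = InterestRule(apr=Decimal("0.365"), compounding_per_year=365,
                    grace_period_days=25)
```

Using `Decimal` rather than floats is a deliberate choice for financial amounts, since binary floating point cannot represent values like 0.01 exactly.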