- Design and implement scalable ETL pipelines on Databricks (PySpark, SQL, Delta Lake) to process credit card transactions, balances, and payments.
- Develop the core calculation engines and integrate with upstream/downstream systems.
- Optimize Spark jobs for large-scale financial datasets (billions of records) using partitioning, caching, and Adaptive Query Execution (AQE).
- Ensure data quality and reconciliation across raw, curated, and output layers.
- Implement parameterized rules (APR, compounding frequency, grace-period logic).
- Collaborate with business analysts to translate product rules into technical implementations.
- Apply unit tests, CI/CD pipelines, and monitoring for production-grade pipelines.
- Ensure compliance with financial data governance, lineage, and audit requirements.
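To illustrate the "parameterized rules" item above, here is a minimal pure-Python sketch of an APR/grace-period calculation. All names, parameter defaults, and the grace-period semantics are assumptions for illustration only; in the actual role this logic would more likely be expressed as PySpark column expressions or a UDF over Delta tables.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical parameter set; field names and values are illustrative.
@dataclass
class InterestParams:
    apr: float                # annual percentage rate, e.g. 0.24
    compounding_days: int     # day-count basis for compounding, e.g. 365
    grace_period_days: int    # days after statement date before interest accrues

def daily_interest(balance: float, params: InterestParams,
                   statement_date: date, as_of: date) -> float:
    """Accrue one day of simple interest unless still inside the grace period."""
    days_since_statement = (as_of - statement_date).days
    if days_since_statement <= params.grace_period_days:
        return 0.0
    daily_rate = params.apr / params.compounding_days
    return balance * daily_rate

params = InterestParams(apr=0.24, compounding_days=365, grace_period_days=25)
# Inside the grace period: no interest accrues.
print(daily_interest(1000.0, params, date(2024, 1, 1), date(2024, 1, 10)))
# Past the grace period: one day's accrual at APR / day-count basis.
print(round(daily_interest(1000.0, params, date(2024, 1, 1), date(2024, 2, 10)), 4))
```

Keeping the rule parameters in a dataclass (or, on Databricks, a config table joined onto the transaction data) is what makes the engine "parameterized": APR, compounding basis, and grace period can vary per product without code changes.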
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed
Number of Employees
5,001-10,000 employees