We are seeking a Senior Data Engineer with a strong background in the Hadoop ecosystem and experience in scalable data processing with PySpark. The ideal candidate will have:

- Expertise in data warehousing and query-based analysis using Hive and Impala
- Proficiency in Linux/Unix for scripting and operational troubleshooting
- A solid understanding of distributed computing concepts, data partitioning, and performance tuning on Hadoop

The candidate will develop and maintain large-scale data pipelines and ETL workflows that support our data analytics and reporting solutions, particularly in the context of Anti-Money Laundering (AML) processes.