- Design, build, and operate Azure Lakehouse architectures using Azure Databricks, Azure Data Lake Storage (ADLS Gen2), Azure Synapse Analytics, and Azure Data Factory to support analytical and operational workloads.
- Process large-scale structured and unstructured datasets using optimized batch and streaming pipelines built on Apache Spark, Delta Lake, Python, SQL, and Scala.
- Design, develop, and maintain scalable ETL/ELT pipelines using Databricks Workflows, Spark jobs, and Delta Lake, ensuring reliability, performance, and data quality at enterprise scale.
- Implement real-time and batch data processing solutions and optimize pipelines for production use.
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees