A PySpark and Databricks Developer with a strong understanding of the full ETL/Azure lifecycle and a background in data projects.

Responsibilities

- Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Azure Data Factory, and other Azure services
- Implement and optimize Spark jobs, data transformations, and data processing workflows; manage Databricks notebooks and Delta Lake with Python and Spark SQL in Databricks
- Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure, including Databricks Asset Bundle (DAB) deployments
- Ensure data integrity and data quality checks pass with zero errors when deployed to production
- Stay current with new Databricks features: Unity Catalog, Lakeflow, DAB deployments, and catalog federation
- Apply hands-on experience with data extraction (schemas, corrupt records, error handling, parallelized code), transformations and loads (user-defined functions, join optimizations), and production optimization (automated ETL); a minimal sketch of these patterns follows this description
- Each team member is expected to be aware of risk within their functional area. This includes observing all policies, procedures, laws, regulations, and risk limits specific to their role. Additionally, they should raise and report known or suspected violations to the appropriate Company authority in a timely fashion.
- Performs other related duties as required.

The information in this description is intended to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications required of employees assigned to this job.

Synovus is an Equal Opportunity Employer committed to fostering an inclusive work environment.
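As a rough illustration of the extraction and transformation patterns named in the responsibilities above (schema enforcement, corrupt-record handling, a user-defined function, and a broadcast-join optimization), here is a minimal PySpark sketch. The paths, schema, column names, and table names are illustrative assumptions, not part of the posting.

```python
# Minimal sketch; file paths, schema, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extraction: enforce a schema and route unparseable rows into
# _corrupt_record instead of failing the whole job (PERMISSIVE mode).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("region", StringType()),
    StructField("_corrupt_record", StringType()),  # captures bad rows
])
orders = (spark.read
          .option("mode", "PERMISSIVE")
          .option("columnNameOfCorruptRecord", "_corrupt_record")
          .schema(schema)
          .csv("/mnt/raw/orders/"))  # hypothetical path

# Error handling: split good and corrupt records for separate processing.
bad = orders.filter(F.col("_corrupt_record").isNotNull())
good = orders.filter(F.col("_corrupt_record").isNull()).drop("_corrupt_record")

# Transformation: a simple UDF (built-in functions are preferred when available).
@F.udf(StringType())
def amount_band(amount):
    if amount is None:
        return "unknown"
    return "high" if amount >= 1000 else "low"

enriched = good.withColumn("band", amount_band("amount"))

# Join optimization: broadcast the small dimension table to avoid a shuffle.
regions = spark.createDataFrame(
    [("NA", "North America"), ("EU", "Europe")], ["region", "region_name"])
result = enriched.join(F.broadcast(regions), "region", "left")

# Load: append to a Delta table (Delta Lake is built into Databricks).
result.write.format("delta").mode("append").saveAsTable("analytics.orders_enriched")
```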
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 1,001-5,000 employees