About The Position

We are seeking a PySpark and Databricks Developer with a strong understanding of the full ETL and Azure lifecycle and a background in data projects. The information in this description indicates the general nature and level of work performed by employees within this classification. It is not designed to contain, or to be interpreted as, a comprehensive inventory of all duties, responsibilities, and qualifications required of employees assigned to this job. Synovus is an Equal Opportunity Employer committed to fostering an inclusive work environment.

Requirements

  • Bachelor's degree in computer science, information systems, or a related field or an equivalent combination of education and experience.
  • Three years of experience in information technology, including collecting business requirements related to the usage of data, performing data mapping, conducting data quality assessments, and developing, utilizing, and writing new automation.
  • Understanding of reporting and/or visualization tools (e.g., Tableau, SSRS, SQL Server)
  • Knowledge of a variety of technologies, data models, and insights across all relevant data sources
  • Understanding of Governance principles
  • Understanding of concepts such as data mining, extraction, and analysis as they pertain to a specific bank pillar (e.g., Commercial, Retail, Wealth)
  • Analytical and critical thinking skills
  • Ability to quickly shift between technology stacks
  • Ability to mentor and train team members
  • Strong verbal and written communication skills
  • Azure Databricks
  • Python
  • Apache Spark
  • SQL
  • ETL processes
  • Data Warehousing
  • Data Pipeline Design
  • Cloud Architecture
  • Performance Tuning

Nice To Haves

  • Bachelor's degree in computer science, information technology, or a related field.
  • Minimum of 5 years of experience in data engineering or similar roles.
  • Proven expertise with Azure Databricks and data processing frameworks.
  • Strong understanding of data warehousing, ETL processes, and data pipeline design.
  • Experience with SQL, Python, and Spark.
  • Excellent problem-solving and analytical skills.
  • Effective communication and teamwork abilities.

Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services
  • Implement and optimize Spark jobs, data transformations, and data processing workflows; manage Databricks notebooks and Delta Lake using Python and Spark SQL in Databricks
  • Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure, including Databricks Asset Bundle (DAB) deployments
  • Ensure data integrity and data quality checks pass with zero errors when deployed to production
  • Understand new Databricks features such as Unity Catalog, Lakeflow, DAB deployments, and Catalog Federation
  • Hands-on experience with data extraction (extracts, schemas, corrupt records, error handling, parallelized code), transformations and loads (user-defined functions, join optimizations), and production optimization (automated ETL), as illustrated in the sketch after this list
  • Each team member is expected to be aware of risk within their functional area. This includes observing all policies, procedures, laws, regulations and risk limits specific to their role. Additionally, they should raise and report known or suspected violations to the appropriate Company authority in a timely fashion.
  • Perform other related duties as required.
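
For illustration only, below is a minimal sketch of the kind of PySpark extract-transform-load pattern described in the responsibilities above, assuming a Databricks notebook where the spark session is provided by the runtime; the schema, file path, and table names are hypothetical placeholders, not part of the posting.

  # Minimal illustrative sketch: extraction with corrupt-record handling,
  # a simple transformation, and a Delta Lake load in a Databricks notebook.
  from pyspark.sql import functions as F
  from pyspark.sql.types import StructType, StructField, StringType

  # Extract: explicit schema; PERMISSIVE mode routes malformed rows into
  # the _corrupt_record column instead of failing the job.
  schema = StructType([
      StructField("txn_id", StringType(), True),
      StructField("amount", StringType(), True),
      StructField("_corrupt_record", StringType(), True),
  ])

  raw = (
      spark.read
      .schema(schema)
      .option("header", "true")
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .csv("/mnt/raw/transactions/")   # hypothetical source path
  ).cache()                            # cache so the corrupt-record column can be filtered reliably

  good = raw.filter(F.col("_corrupt_record").isNull()).drop("_corrupt_record")
  errors = raw.filter(F.col("_corrupt_record").isNotNull())

  # Transform: cast types and stamp the load time.
  transformed = (
      good
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
      .withColumn("load_ts", F.current_timestamp())
  )

  # Load: append to Delta Lake tables (hypothetical names); corrupt rows are quarantined.
  transformed.write.format("delta").mode("append").saveAsTable("analytics.transactions")
  errors.write.format("delta").mode("append").saveAsTable("quarantine.transaction_errors")

Routing malformed rows into a quarantine table rather than dropping them is one way to keep production loads error-free while preserving the records for data quality review.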