Cloud Data Specialist (Azure) - Vice President

Sumitomo Mitsui Banking Corporation, Charlotte, NC (Hybrid)

About The Position

SMBC Group is a top-tier global financial group. Headquartered in Tokyo and with a 400-year history, SMBC Group offers a diverse range of financial services, including banking, leasing, securities, credit cards, and consumer finance. The Group has more than 130 offices and 80,000 employees worldwide in nearly 40 countries. Sumitomo Mitsui Financial Group, Inc. (SMFG) is the holding company of SMBC Group, which is one of the three largest banking groups in Japan. SMFG’s shares trade on the Tokyo, Nagoya, and New York (NYSE: SMFG) stock exchanges. In the Americas, SMBC Group has a presence in the US, Canada, Mexico, Brazil, Chile, Colombia, and Peru. Backed by the capital strength of SMBC Group and the value of its relationships in Asia, the Group offers a range of commercial and investment banking services to its corporate, institutional, and municipal clients. It connects a diverse client base to local markets and the organization’s extensive global network. The Group’s operating companies in the Americas include Sumitomo Mitsui Banking Corp. (SMBC), SMBC Nikko Securities America, Inc., SMBC Capital Markets, Inc., SMBC MANUBANK, JRI America, Inc., SMBC Leasing and Finance, Inc., Banco Sumitomo Mitsui Brasileiro S.A., and Sumitomo Mitsui Finance and Leasing Co., Ltd.

Role Description

SMBC is looking for an Azure Cloud Data Engineer in the Production Support group with strong Azure Data Factory and Databricks knowledge. The candidate will support Azure-based data integration and analytics pipelines built with Azure Data Factory and Azure Databricks, ensuring the uptime and performance of critical pipelines and workflows. The role also requires knowledge of complex interface process development and the ability to troubleshoot issues.

Requirements

  • Recommended years of experience: 7
  • Strong hands-on experience with Azure Data Factory (ADF): pipeline orchestration, linked services, and integration runtimes.
  • Experience with Azure Databricks: running and debugging notebooks, managing clusters, and reading Spark job logs.
  • Proficient in SQL: writing and debugging queries, validating data.
  • Good understanding of Azure services: ADLS, Key Vault, Azure Monitor, Log Analytics.
  • Familiarity with Azure DevOps pipelines and Git Integrations.
  • Scripting knowledge: Python, PowerShell, or Bash.
  • Ability to work on weekends for maintenance, production implementations, recovery tests and system verifications / validations.
  • Ability to address production issues from home outside of normal business hours.
  • Extensive experience with cloud solutions, specifically in Azure.
  • Experience with Azure cloud services such as Azure Data Factory, ADLS Gen2, Azure databases, Functions, Databricks, or similar technologies.
  • Good understanding of ETL/ELT.
  • Experience with RDBMS systems such as Azure SQL Database and Oracle, and with NoSQL databases such as MongoDB.
  • Understanding of indexing, partitioning, and other optimization techniques.
  • Experience with stored procedures, functions, and triggers.
  • Experience with Confluence, ServiceNow, and JIRA.
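
Much of the troubleshooting this role describes starts from raw pipeline-run records. As a rough illustration only (hypothetical data and field names, not SMBC tooling or the actual ADF API schema), the sketch below groups failed pipeline runs by error message to surface recurring root causes:

```python
from collections import Counter

# Hypothetical ADF pipeline-run records; the field names are illustrative,
# loosely modeled on what a "query pipeline runs" call might return.
runs = [
    {"pipeline": "ingest_trades", "status": "Succeeded", "error": None},
    {"pipeline": "ingest_trades", "status": "Failed", "error": "Timeout on linked service"},
    {"pipeline": "load_positions", "status": "Failed", "error": "Timeout on linked service"},
    {"pipeline": "load_positions", "status": "Failed", "error": "Bad schema in source file"},
]

def summarize_failures(runs):
    """Count failed runs per error message to spot recurring root causes."""
    failed = [r for r in runs if r["status"] == "Failed"]
    return Counter(r["error"] for r in failed)

summary = summarize_failures(runs)
print(summary.most_common())
```

In practice the records would come from the ADF monitoring API or Log Analytics rather than a hard-coded list; the grouping step is the same either way.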

Nice To Haves

  • Understanding of Spark concepts and Delta Lake (preferred).
  • Knowledge of the DataStage ETL application is a plus.
  • Experience with Azure Monitor, Application Insights, and Log Analytics.
  • Familiarity with cluster- and pipeline-level metrics and logs.

Responsibilities

  • Monitor, troubleshoot, and support ADF pipelines and Databricks notebooks/jobs in production.
  • Analyze pipeline failures, Spark job issues, data mismatches, cluster timeouts, resource unavailability, and latency bottlenecks.
  • Perform root cause analysis for incidents and outages.
  • Understand the ADF Components and architecture.
  • Knowledge of data integration techniques and best practices.
  • Experience with connecting to various data sources and destinations.
  • Ability to orchestrate complex data workflows and transformations.
  • Monitoring and troubleshooting data pipeline executions.
  • Familiarity with ADF data flow activities for data transformations.
  • Version control and deployment management using Azure DevOps or similar tools.
  • Awareness of ADF Integration with Azure services like Azure Data Lake Storage, Azure Databricks, etc.
  • Skills in implementing streaming and batch data ingestion using Delta Lake.
  • Skills in implementing data pipelines and workflows in Databricks.
  • Familiarity with Databricks notebooks for interactive data exploration and development.
  • Integrating Databricks with Azure services like ADLS Gen2 and Azure SQL Database.
  • Monitoring and optimizing Databricks jobs for cost efficiency.
  • Proficiency in Git for managing code repositories, including branching, merging and pull requests.
  • Support CI/CD pipelines for deployment using Azure DevOps.
  • Participate in on-call rotation and ensure business continuity via proper DR strategies.
  • Ensure High Availability (HA) of ADF pipelines and auto-scaling/failover readiness of Databricks clusters.
  • Manage alerts, incidents, and escalations using ServiceNow, Azure Monitor, Log Analytics, etc.
  • Review and provide feedback on core code changes and support production deployments.
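
Several of the responsibilities above involve spotting latency bottlenecks and keeping Databricks jobs cost-efficient. A minimal sketch of that kind of first-pass check, assuming hypothetical job-run log data (the names and shapes are illustrative, not the Databricks Jobs API), flags jobs whose average runtime exceeds a threshold:

```python
from statistics import mean

# Hypothetical Databricks job-run log entries: (job name, runtime in minutes).
job_runs = [
    ("daily_delta_merge", 42.0),
    ("daily_delta_merge", 55.0),
    ("hourly_ingest", 6.5),
    ("hourly_ingest", 7.0),
]

def slow_jobs(job_runs, threshold_minutes):
    """Return jobs whose average runtime exceeds the threshold --
    a simple first pass at spotting latency and cost outliers."""
    by_job = {}
    for name, minutes in job_runs:
        by_job.setdefault(name, []).append(minutes)
    return {
        name: mean(times)
        for name, times in by_job.items()
        if mean(times) > threshold_minutes
    }

print(slow_jobs(job_runs, threshold_minutes=30))
```

A real version would pull run durations from the Jobs API or cluster logs and feed the outliers into alerting (e.g., ServiceNow), but the aggregation logic is the same.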


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 5,001-10,000 employees
