VP, Data Engineer

Sumitomo Mitsui Banking Corporation
Charlotte, NC
Hybrid

About The Position

SMBC Group is a top-tier global financial group with a 400-year history, headquartered in Tokyo. It offers a diverse range of financial services and has a significant international presence. In the Americas, SMBC Group provides commercial and investment banking services to corporate, institutional, and municipal clients, connecting them to local markets and its global network. The Group's operating companies in the Americas include Sumitomo Mitsui Banking Corp. (SMBC), SMBC Nikko Securities America, Inc., SMBC Capital Markets, Inc., SMBC MANUBANK, JRI America, Inc., SMBC Leasing and Finance, Inc., Banco Sumitomo Mitsui Brasileiro S.A., and Sumitomo Mitsui Finance and Leasing Co., Ltd.

This role involves building large-scale batch and real-time data pipelines using Azure cloud platform frameworks. The position requires designing and implementing high-performance data ingestion pipelines from multiple sources using Azure Databricks and Azure Data Factory. The Data Engineer will develop scalable and reusable frameworks for ingesting datasets and will design and develop ETL, data integration, and data migration processes. Collaboration with architects, engineers, information analysts, and business and technology stakeholders is crucial for developing and deploying enterprise-grade platforms. The role includes integrating end-to-end data pipelines, ensuring data quality and consistency, and working with event-based/streaming technologies. Support for additional project components, such as API interfaces and Search, is also expected, along with evaluating tools against customer requirements.

Requirements

  • Experience with ADLS, Azure Databricks, Azure SQL Database, and data warehousing
  • Strong working experience implementing Azure cloud components using Azure Data Factory, Azure Data Analytics, Azure Data Lake, Azure Data Catalog, and Logic Apps
  • Knowledge of Azure Storage services (ADLS, Storage Accounts)
  • Expertise in designing and deploying data applications on Azure cloud solutions
  • Hands-on experience in performance tuning and optimizing code running in Databricks environments
  • Good understanding of SQL, T-SQL, and/or PL/SQL
  • Experience working on Agile projects, with knowledge of Jira
  • Demonstrated analytical and problem-solving skills, particularly those that apply to big data environments

Nice To Haves

  • Experience with data ingestion projects in an Azure environment
  • Experience with Python scripting, Spark SQL, and PySpark is a plus

Responsibilities

  • Build large-scale batch and real-time data pipelines using data processing frameworks on the Azure cloud platform.
  • Design and implement highly performant data ingestion pipelines from multiple sources using Azure Databricks.
  • Build data pipelines using Azure Data Factory.
  • Develop scalable and reusable frameworks for ingesting datasets.
  • Design and develop ETL, data integration, and data migration processes.
  • Partner with architects, engineers, information analysts, and business and technology stakeholders to develop and deploy enterprise-grade platforms that enable data-driven solutions.
  • Integrate the end-to-end data pipeline, taking data from source systems to target data repositories while ensuring data quality and consistency are maintained at all times.
  • Work with event-based/streaming technologies to ingest and process data.
  • Work with other members of the project team to support delivery of additional project components (API interfaces, Search).
  • Evaluate the performance and applicability of multiple tools against customer requirements.

Benefits

  • Hybrid workforce model that provides employees with the opportunity to work from home as well as from an SMBC office.
  • Reasonable accommodations during candidacy for applicants with disabilities consistent with applicable federal, state, and local law.