Sr. Data Engineer

Asahi Kasei
Charlotte, NC (Hybrid)

About The Position

The Asahi Kasei Group operates with a commitment to creating for tomorrow. Our business sectors, Material, Homes, and Health Care, contribute to the development of society by anticipating the changing needs of people around the world. We look for candidates who offer a fresh perspective and a variety of skills to help us achieve our commitment. We are currently seeking applicants for the following job opening:

Company: Asahi Kasei America, Inc.
Position Type: Full time / Hybrid (in office 1-2 times per week)
Location: 3540 Toringdon Way, Suite 200, Charlotte, NC 28277
Travel: 5%, including travel to Europe once per year

Requirements

  • Bachelor’s Degree in Computer Science, Information Technology, or a related field AND five (5) years of experience as a Data Engineer, Big Data Engineer, Data Architect, SQL Developer, Database Developer, or related role.
  • Applicants must have 5 years’ experience with: developing business intelligence solutions, including data integration, data schema development, data pipelines, modeling, and reporting/analytics; database design principles, data modeling, partitioning, and data warehousing; Python and shell scripting; SQL writing, query tuning, and query performance optimization; and data analysis, data modeling, data migration, computer programming, and problem-solving.
  • Applicants must have 4 years’ experience with data validation, cleansing, and feature engineering using Pandas, Spark DataFrames, and data quality (DQ) solutions.
  • Applicants must have 3 years’ experience with CI/CD and change data capture (CDC).
  • Applicants must have 2 years’ experience with: big data pipeline development, monitoring, and support (ETL, SSIS, Hadoop, HDFS, Spark, Hive, RDDs, and UDFs); and the cloud data ecosystem (Spark API, Spark SQL, PySpark, Scala, Python, and data streaming).
  • Applicants must have demonstrated experience with data science tools: Python ML libraries, Scala, and Databricks.
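The data validation and cleansing experience listed above can be illustrated with a minimal sketch. All names and rules here are hypothetical, and the standard library stands in for the Pandas or Spark DataFrame tooling the posting actually names:

```python
# Illustrative sketch of data validation and cleansing. Field names and
# data-quality rules are hypothetical; real pipelines would typically use
# Pandas or Spark DataFrames rather than plain dicts.

def cleanse_records(rows):
    """Strip whitespace, normalize types, and drop rows failing basic checks."""
    cleaned = []
    for row in rows:
        name = (row.get("name") or "").strip()
        try:
            amount = float(row.get("amount"))
        except (TypeError, ValueError):
            continue  # rule: amount must be numeric
        if not name or amount < 0:
            continue  # rule: non-empty name, non-negative amount
        cleaned.append({"name": name, "amount": amount})
    return cleaned

raw = [
    {"name": "  Acme ", "amount": "19.99"},
    {"name": "", "amount": "5"},          # rejected: empty name
    {"name": "Beta", "amount": "oops"},   # rejected: non-numeric amount
]
print(cleanse_records(raw))  # [{'name': 'Acme', 'amount': 19.99}]
```

In a DataFrame-based pipeline the same rules would usually be expressed as vectorized filters rather than a row loop.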

Responsibilities

  • Evaluate and improve T-SQL, MDX, DAX, and HiveQL code, including queries, stored procedures, functions, temporary tables, parameterization, and complex joins and groupings.
  • Develop and optimize ETL/ELT pipelines to load data from on-premises and online systems.
  • Ensure the stability and performance of data solutions.
  • Conduct data warehouse model design, development, and support.
  • Prepare, cleanse, and validate datasets for data science purposes.
  • Assist with data troubleshooting, feature engineering, and data discovery.
  • Develop tools to automate development and monitoring processes.
  • Develop Python algorithms for data processing.
  • Support the data science environment and assist with data science projects.
  • Manage time effectively to ensure that projects are delivered on schedule.
  • Provide on-going maintenance and support of existing and new data solutions.
  • Support solution automation and CI/CD.
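The first responsibility, evaluating queries with parameterization, joins, and groupings, can be sketched with a small runnable example. The schema is hypothetical, and SQLite stands in for the T-SQL/HiveQL environments the posting names:

```python
import sqlite3

# Hypothetical two-table schema, used only for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'NC'), (2, 'NC'), (3, 'SC');
    INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 25.0), (3, 10.0);
""")

# A parameterized query combining a join with a grouping: the kind of
# construct this role would evaluate and tune.
query = """
    SELECT c.region, SUM(o.total) AS revenue
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    WHERE c.region = ?
    GROUP BY c.region
"""
print(conn.execute(query, ("NC",)).fetchall())  # [('NC', 175.0)]
```

The `?` placeholder keeps the query parameterized rather than string-built, which is the same discipline the posting asks for in stored procedures and production SQL.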