Software Developer (Data Engineer)

EUREKA INFOTECH, INC. · Hicksville, NY
Remote

About The Position

Software Developer (Data Engineer). Multiple openings. The role centers on migrating on-premises SSIS and SQL Server workloads to the Azure cloud: moving data from legacy Hive systems to ADLS with ADF pipelines, building Synapse tables and historical data migration (HDM) flows into the Azure Data Lake Curated Zone, converting SQL Server view scripts for Synapse, and developing complex database objects, stored procedures, functions, packages, and triggers in SQL and PL/SQL, along with writing and testing Java applications. The tool set includes Azure (ADLS Gen2, Azure Synapse, Azure Data Factory), Databricks, Python, PySpark, Spark SQL, Git, SSIS, SSMS, Datadog, and Bitbucket. 40 hrs/wk. Full duties and qualifications, including the degree, experience, travel, and telecommuting terms, are listed under Requirements and Responsibilities below.

Requirements

  • Must have a Master's degree or equivalent in Computer Science, Electronics Engineering, Software Systems, or a related field (a Bachelor's degree plus 5 years of progressive post-baccalaureate experience will be accepted in lieu of a Master's).
  • Must have 2 years of experience (or 2 years of experience as a Lead Technology, Lead Designer, Software Engineer, or in a related occupation).
  • Must have 2 years of experience developing complex database objects, stored procedures, functions, packages, and triggers using SQL and PL/SQL; writing and testing applications using Java; testing and debugging issues; and working with Java, JavaScript, and Unix.
  • Must be willing to travel/relocate to unanticipated locations throughout the US on short notice for extended periods of time.
  • Telecommuting permitted.
  • Work with Java, JavaScript, and Unix.
  • Work with Azure (ADLS Gen2, Azure Synapse, Azure Data Factory), Databricks, Python, PySpark, Spark SQL, Git, SSIS, SSMS, Datadog, and Bitbucket.

Responsibilities

  • Work on migrating on-premises SSIS and SQL Server workloads to the Azure cloud.
  • Develop complex database objects, stored procedures, functions, packages, and triggers using SQL and PL/SQL.
  • Write and test applications using Java.
  • Test and debug issues.
  • Work on data movement from legacy Hive systems to ADLS using ADF pipelines.
  • Create Synapse tables and develop data-movement configurations that map source data to the corresponding cloud data types.
  • Participate in data migration using Spark SQL, Azure Blob Storage, Azure Data Factory, Azure Synapse, and SSIS.
  • Work on historical data migration (HDM) from legacy systems to Azure Data Lake (Curated Zone) and Synapse using Azure Data Factory (ADF V2).
  • Troubleshoot performance issues and implement partitioning and replaceWhere functionality to avoid full table loads (see the first sketch after this list).
  • Work on migrating custom view scripts from SQL Server to the Synapse Curated Zone (CZ).
  • Create an automated Python script that converts view definitions to be compatible with the cloud database and schema names (see the second sketch after this list).
  • Create, debug, schedule, and monitor jobs.
  • Interact with SMEs, BSAs, the technical lead, and cross-functional teams to gather requirements and discuss the implementation approach for the ETL logic of complex packages.
  • Analyze and debug defects in production and apply fixes once the ETL jobs are active.
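
For illustration, a minimal PySpark sketch of the partitioning and replaceWhere approach mentioned above, assuming the curated data lands in a Delta table partitioned by a hypothetical load_date column; the storage paths, container names, and date range are placeholders, not the project's actual configuration:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Read only the incremental slice from the raw zone instead of the full table.
    incoming = spark.read.parquet(
        "abfss://raw@storageacct.dfs.core.windows.net/sales/2024-06"
    )

    # Overwrite only the partitions covered by the incoming slice; the replaceWhere
    # predicate leaves the rest of the Delta table untouched, so a full reload is avoided.
    (
        incoming.write.format("delta")
        .mode("overwrite")
        .option("replaceWhere", "load_date >= '2024-06-01' AND load_date < '2024-07-01'")
        .partitionBy("load_date")
        .save("abfss://curated@storageacct.dfs.core.windows.net/sales")
    )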
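
And a sketch of the kind of automated Python conversion script described above for view definitions; the database-to-schema mapping and file layout are hypothetical stand-ins for the migration's real configuration:

    import re
    from pathlib import Path

    # Hypothetical mapping of on-prem SQL Server database.schema prefixes to target
    # Synapse schemas; the real values would come from the migration configuration.
    DB_TO_SCHEMA = {
        "LegacyDW.dbo": "curated",
        "Staging.dbo": "staging",
    }

    def convert_view_definition(sql: str) -> str:
        """Rewrite three-part names (Database.schema.object) to the Synapse schema."""
        for source, target in DB_TO_SCHEMA.items():
            sql = re.sub(rf"\b{re.escape(source)}\.", f"{target}.", sql, flags=re.IGNORECASE)
        return sql

    def convert_folder(src_dir: str, dst_dir: str) -> None:
        # Convert every .sql view script in src_dir and write the result to dst_dir.
        out = Path(dst_dir)
        out.mkdir(parents=True, exist_ok=True)
        for script in Path(src_dir).glob("*.sql"):
            out.joinpath(script.name).write_text(convert_view_definition(script.read_text()))

    if __name__ == "__main__":
        convert_folder("views/sqlserver", "views/synapse")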

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Professional, Scientific, and Technical Services
  • Number of Employees: 51-100 employees
