JOB SUMMARY:
The Data Engineer I or II (TO) is responsible for translating data into readily consumable forms and delivering integrated data to consumers by building, operationalizing, and maintaining data pipelines for Data, Analytics & Artificial Intelligence use cases across heterogeneous environments. The Data Engineer I or II (TO) also works with various data integration tools that support a combination of data delivery styles, such as virtualization, data replication, messaging, and streaming, in hybrid and multi-cloud integration scenarios.

JOB REQUIREMENTS:

Education/Experience:
Bachelor's degree in Computer Science (CS), MIS, CIS, Mathematics, Statistics (Theoretical/Computational), Machine Learning, or a related field.
Proven knowledge of data engineering, data integration, and data science principles is required.
6+ years of related work experience in a fast-paced, competitive organization driven by data and enabled by technology.

Knowledge/Skills:
Working experience with batch and real-time data processing frameworks.
Working experience with data modeling, data access, schemas, and data storage techniques.
Working experience with data quality tools.
Experience creating functional and technical designs for data engineering and analytics solutions.
Experience implementing data models of different schemas and working with diverse data source types.
Hands-on experience developing solutions with big data technologies such as Hadoop, Hive, and Spark.
Hands-on experience developing and supporting Python-based AI/ML solutions.
6+ years of hands-on experience designing, developing, testing, deploying, and supporting data engineering and analytics solutions using on-premises tools such as Microsoft's BI Stack (SSIS/SSAS/SSRS), Informatica, Oracle GoldenGate, SQL, Oracle, and SQL Server.
4+ years of hands-on experience designing, developing, testing, deploying, and supporting data engineering and analytics solutions using Microsoft cloud-based tools such as Azure Data Lake, Azure Data Factory, Azure Databricks, Python, Azure Synapse, Azure Key Vault, and Power BI.
Experience with containerization technologies such as Docker and OpenShift.
Experience with Agile, DevOps, and CI/CD methodologies.
Hands-on experience designing and developing solutions involving data sourcing, enrichment, and delivery using APIs and Web Services.
Experience working with Jira or similar tools.
Experience working with Kafka or similar tools.
Job Type
Full-time
Career Level
Entry Level
Number of Employees
5,001-10,000 employees