Job Duties:
- 8-10 years of experience in data engineering and data analysis.
- Hands-on experience with the Hadoop stack of technologies (Hadoop, PySpark, HBase, Hive, Pig, Sqoop, Scala, Flume, HDFS, MapReduce).
- Hands-on experience with Python and Kafka.
- Good understanding of database concepts, data design, data modeling, and ETL.
- Hands-on experience analyzing, designing, and coding ETL programs, including data pre-processing, data extraction, data ingestion, data quality, data normalization, and data loading.
- Experience delivering projects using Agile methodology; hands-on with Jira.
- Experience in client-facing roles, with strong communication and thought-leadership skills to coordinate deliverables across the SDLC.
- Good understanding of machine learning models and artificial intelligence preferred.
- Good understanding of data components, data processing, and data analytics on AWS is a plus.
- Experience with data modeling tools such as Erwin is a plus.

Preferred Location: Cleveland or Pittsburgh.
Education: Master's or Bachelor's degree in Computer Science or an equivalent field.
Job Type
Full-time
Career Level
Mid Level
Number of Employees
11-50 employees