Mid-level Data Engineer

Resume Edge, Phoenix, AZ


Requirements

  • Master's degree in computer applications or equivalent, or a bachelor's degree in engineering or computer science or equivalent
  • Deep understanding of Hadoop and Spark architecture and their working principles
  • Deep understanding of data warehousing concepts
  • Ability to design and develop optimized data pipelines for batch and real-time data processing
  • Experience in data analytics and data cleansing
  • 5+ years of software development experience
  • 5+ years of experience with Python or Java
  • Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames), including optimizing joins while processing huge amounts of data
  • 3+ years of hands-on experience with MapReduce, Hive, and Spark (Core, SQL, and PySpark)
  • Hands-on experience with Google Cloud Platform (BigQuery, Dataproc, Cloud Composer)
  • Hands-on experience with Airflow
  • 3+ years of experience in UNIX shell scripting
  • Experience in analysis, design, development, testing, and implementation of system applications
  • Ability to communicate effectively with internal and external business partners
  • Understanding of distributed ecosystems
  • Experience with machine learning models, RAG, and NLP
  • Experience designing and building solutions using Kafka streams or queues
  • Experience with NoSQL databases, e.g., HBase, Cassandra, Couchbase, or MongoDB
  • Experience with data visualization tools such as Power BI, Tableau, Sisense, or Looker
  • Ability to learn and apply new programming concepts

Nice To Haves

  • Knowledge of the financial reporting ecosystem
  • Experience leading engineering and scrum teams