Responsibilities
- Own the technical roadmap and guide development best practices.
- Build and optimize large-scale data workflows.
- Champion innovation with AI/ML integration.
- Ensure quality, security, and compliance across all deliverables.

Required Skills
- Expertise in Python, Spark, and distributed computing.
- Experience with HDFS, Oracle databases, and enterprise-scale systems.
- Familiarity with generative AI and ML frameworks (TensorFlow, PyTorch, Hugging Face).

What We Offer
- Work on high-impact projects that leverage AI and big data.
- Be part of a collaborative, forward-thinking team.

Qualifications
- BS in Computer Science or Computer Engineering, or equivalent work experience
- 6-10+ years of experience architecting data and analytics solutions on modern architectures
- 5+ years' experience with one or more SQL-on-Big-Data/Hadoop technologies (Hive, Impala, Spark SQL)
- 5+ years' experience with containers (OpenShift Platform/Docker)
- Experience in systems scalability and performance optimization
- Experience with CI/CD continuous integration tools such as GitHub
- Strong communication and teamwork skills; able to work effectively with others within and across departments
- Excellent problem-solving and critical-thinking skills; a confident decision maker
Job Type
Full-time
Career Level
Mid Level
Number of Employees
5,001-10,000 employees