This is an Engineering Lead role.

Responsibilities:
- Understand the overall domain architecture and manage changes within it.
- Understand the applications within the domain and stay knowledgeable about them.
- Develop conceptual, logical, and physical data models, and implement RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms.
- Oversee and govern the expansion of the existing data architecture and the optimization of data query performance using best practices.
- Perform hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Use the Escalation Matrix for showstoppers or blockers that put the timeline at risk.
- Work independently and collaboratively; be self-enabled and motivated.

Requirements:
- Expertise in big data development, with Python, Spark, and Scala programming skills.
- Experience with data warehouses, data lakes, and enterprise big data platforms in multi-data-center contexts.
- Good knowledge of metadata management, data modeling, and related tools (Erwin, ER Studio, or others).
- Basic knowledge of Java, Spring, Spring Boot, Spring Cloud, Oracle, Elasticsearch, Hazelcast, Kafka, REST APIs, and JSON/YAML.