At Axians, we value talent, not labels. We believe in a culture of inclusion, where everyone has a place and all applications are considered on merit, without discrimination. This is your opportunity to join an international group on a project that needs you to help meet the challenges of digital transformation.

💻 THE ROLE
We are looking for a #TechTalent to work as a Senior Data Engineer on a 1‑month international project.

💡 WHAT WE'RE LOOKING FOR
- Expert in Apache Kafka: broker configuration and management, partitioning, replication factors, and performance tuning.
- Advanced proficiency in Kafka Streams and Kafka Connect to build real-time pipelines integrating a variety of source and destination systems.
- Strong understanding of the Confluent ecosystem, including Schema Registry, KSQL, and monitoring tools, to ensure robust operations.
- Expertise in event-driven architectures and patterns such as publish-subscribe for distributed systems.
- Skilled in developing and managing APIs and microservices that integrate applications with data platforms.
- Proficient in programming languages such as Java, Scala, and Python for real-time data stream processing.
- Hands-on experience with complementary technologies such as Spark Streaming, Flink, or other real-time processing tools.
- Knowledgeable in relational and NoSQL databases (e.g., PostgreSQL, Cassandra, MongoDB).
- Practical experience with cloud environments (AWS, Azure, GCP), with a focus on big data solutions.
- Familiarity with CI/CD tools (e.g., Git, Jenkins, Docker, Kubernetes) for automation and continuous delivery.
- Technical leadership skills to coordinate and mentor data engineering teams.
- Strong analytical abilities to solve complex problems and identify bottlenecks in data processing systems.
- Excellent communication skills to translate business requirements into technical solutions.
- Proactive in exploring new technologies and crafting strategic approaches to optimize processes and architectures.
- Designed and implemented high-performance streaming architectures using Kafka and Confluent for enterprise-scale projects.
- Built real-time data pipelines to process large data volumes, integrating multiple enterprise systems.
- Applied best practices for data governance, quality control, and compliance (e.g., GDPR).
- Led initiatives to migrate monolithic systems to Kafka Streams-based architectures.

Certifications (a plus):
- Confluent Certified Developer for Apache Kafka.
- Cloud certifications (e.g., AWS Certified Solutions Architect, Google Cloud Professional Data Engineer).
- Additional certifications in Big Data tools or related technologies.
Career Level: Mid Level
Education Level: No Education Listed
Number of Employees: 501-1,000 employees