Java & Hadoop Senior Developer

Aarorn Technologies Inc., Toronto, ON

About The Position

This role requires a Senior Developer with extensive experience in Java, Spring Boot, and Big Data technologies like Hadoop and Kafka. The ideal candidate will have a strong understanding of microservices, containerization, cloud environments, and DevOps practices. Responsibilities include designing, developing, and optimizing applications, ensuring seamless inter-service communication, and troubleshooting performance issues.

Requirements

  • 6 to 8+ years of experience.
  • Hands-on expertise in Spring Boot, Java, REST API development, and Kafka-based event-driven architectures.
  • Strong understanding of microservices lifecycle: design, development, containerization (Docker), orchestration (Kubernetes), and deployment in cloud environments (AWS, Azure, GCP).
  • Experience developing and optimizing Big Data applications using Java, Scala, Hadoop, and Hive.
  • Minimum of 2 years of hands-on Scala programming experience.
  • Experience with Python, Shell scripting, Apache Spark, and Hive.
  • Proficient in database integration with SQL Server, Oracle, and MySQL, with strong SQL query optimization skills.
  • Well-versed with DevOps tools and practices, including CI/CD pipelines using Jenkins, Terraform, GitHub, Docker, and Kubernetes.
  • Skilled in monitoring, troubleshooting, and optimizing microservices using Splunk, CloudWatch, and log-based analysis.
  • Experience building multiple Spring Boot microservices that communicate via Kafka (asynchronous messaging) and HTTP/REST APIs.
  • Familiarity with gRPC as a communication protocol, even if not used in production.

Responsibilities

  • Design, develop, and optimize Java and Big Data applications.
  • Ensure seamless inter-service communication across microservices.
  • Troubleshoot and resolve performance issues.