Confluent Kafka Lead / Python Developer

Capgemini
Nashville, TN
Hybrid

About The Position

We are seeking a Confluent Kafka Lead / Python Developer to design, build, and operate enterprise-scale event streaming platforms. This role combines hands-on engineering with technical leadership, supporting event-driven architectures, real-time data pipelines, and data platform modernization initiatives. The ideal candidate will bring expertise in Kafka/Confluent Platform and Python development, alongside experience in cloud-native environments, DevOps practices, and scalable distributed systems.

Requirements

  • 6–10+ years of software engineering experience
  • 4+ years of hands-on Kafka / Confluent Platform experience
  • Strong experience in Python development
  • Confluent Kafka ecosystem (Kafka, Schema Registry, Connect, ksqlDB)
  • Python for streaming and backend development
  • Event-driven architecture and distributed systems
  • CI/CD pipelines and DevOps practices
  • Cloud platforms (AWS, Azure, or GCP)
  • Containerization (Docker, Kubernetes)
  • Strong problem-solving and analytical skills
  • Ability to work in high-scale distributed environments
  • Effective communication and collaboration across teams
  • Leadership and mentoring capabilities

Nice To Haves

  • Experience with stream processing frameworks
  • Understanding of event sourcing patterns
  • Data platform integration experience
  • Relevant certifications in Kafka, cloud, or data engineering

Responsibilities

  • Lead the design, implementation, and operation of Confluent Kafka-based solutions
  • Manage and administer Kafka components, including Kafka brokers, Schema Registry, Kafka Connect, and ksqlDB
  • Define topic design, schema standards, and data governance practices
  • Ensure high availability, scalability, and fault tolerance across environments
  • Develop Python-based producers, consumers, and streaming applications
  • Build real-time data pipelines using event-driven design principles
  • Implement efficient data processing and transformation logic
  • Design and implement event-driven system integrations across enterprise platforms
  • Support real-time data streaming and messaging use cases
  • Apply best practices for event sourcing and stream processing
  • Implement and maintain CI/CD pipelines
  • Contribute to Infrastructure as Code (IaC) initiatives
  • Deploy and manage applications using containerization technologies (Docker/Kubernetes)
  • Monitor system health, troubleshoot issues, and ensure production stability
  • Implement security standards, access controls, and compliance practices
  • Ensure data governance and schema validation
  • Maintain observability, logging, and monitoring across systems
  • Provide technical guidance and mentorship to engineering teams
  • Review system designs and code to ensure best practices
  • Collaborate on platform roadmap, scalability planning, and capacity management
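As an illustration of the Python streaming work described above, the sketch below shows a minimal producer setup of the kind this role involves, using the confluent-kafka client. The topic name, broker address, and event shape are placeholders, not details from this posting.

```python
import json

# Placeholder broker address and settings -- adjust per environment.
PRODUCER_CONFIG = {
    "bootstrap.servers": "localhost:9092",
    "acks": "all",                # wait for all in-sync replicas (fault tolerance)
    "enable.idempotence": True,   # avoid duplicate writes on producer retries
}


def serialize_event(event: dict) -> bytes:
    """Encode an event as UTF-8 JSON for the topic's message value."""
    return json.dumps(event, sort_keys=True).encode("utf-8")


def produce_order(event: dict) -> None:
    # Requires `pip install confluent-kafka` and a reachable broker;
    # shown only as a sketch of a typical producer call.
    from confluent_kafka import Producer

    producer = Producer(PRODUCER_CONFIG)
    producer.produce("orders", value=serialize_event(event))
    producer.flush()  # block until the message is delivered or fails
```

Settings such as `acks=all` and idempotence relate directly to the high-availability and fault-tolerance responsibilities listed above.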

Benefits

  • Paid time off based on employee grade (A-F), as defined by policy: vacation (12-25 days, depending on grade), company-paid holidays, personal days, and sick leave
  • Medical, dental, and vision coverage (or provincial healthcare coordination in Canada)
  • Retirement savings plans (e.g., 401(k) in the U.S., RRSP in Canada)
  • Life and disability insurance
  • Employee assistance programs
  • Other benefits as provided by local policy and eligibility