Senior Platform Engineer (Kafka/Flink)

Fiserv · Alpharetta, GA
$110,000 - $186,000 · Onsite

About The Position

This role supports Fiserv’s Global Kafka Event Streaming Services platform, focusing on the engineering, automation, and operation of highly scalable, secure, and highly available real‑time streaming infrastructure. The engineer will design and operate Kafka and Apache Flink platforms across cloud and on‑prem environments, enable CI/CD and infrastructure automation, and partner closely with application teams and architects to support mission‑critical, low‑latency event streaming workloads.

Requirements

  • 6+ years of experience with one or more middleware, messaging, streaming, and API management products.
  • 3+ years of experience containerizing middleware products.
  • 3+ years of experience with AWS, Azure, PCF, and Kubernetes platforms.
  • 3+ years of experience automating IaC platforms using tools such as, but not limited to, Chef, Puppet, and Spacewalk.
  • 3+ years of experience with multiple scripting and programming languages, such as Python, Perl, shell, and PowerShell.
  • 3+ years of experience with infrastructure automation and orchestration using tools such as Rundeck, Ansible, Terraform, HP/Micro Focus, Splunk, and BMC.
  • Good understanding of infrastructure technologies such as Linux, Active Directory domains, load balancing, gateways, firewalls, authentication/authorization, certificates, Splunk, Datadog, and Dynatrace.
  • Bachelor’s degree in computer science, information security, or a related field, and/or equivalent military experience.

Nice To Haves

  • Experience in the financial services industry.
  • Experience in infrastructure topologies.

Responsibilities

  • Design, deploy, manage, and troubleshoot Kafka and Flink clusters in cloud or on-prem environments, on both VMs and Kubernetes.
  • Build and maintain CI/CD pipelines for seamless integration and automation of Event Streaming services.
  • Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation.
  • Monitor and optimize Kafka and Flink infrastructure performance.
  • Collaborate with developers and architects to build and support event streaming solutions, leveraging tools such as Kafka Connect, Schema Registry, Flink, Snowflake, and Streams Replication Manager (MirrorMaker 2.0).
  • Ensure observability through monitoring tools such as Grafana, Moogsoft, and Dynatrace to visualize system health and performance metrics.
  • Evaluate functional and technical business requirements.
  • Develop and document technical and non-technical infrastructure-related processes.
  • Work with key internal and external stakeholders to create a comprehensive roadmap for technology products.
  • Review and implement processes, procedures and policies for product engineering and operations.
  • Participate in an on-call rotation supporting business operations, both nationally and internationally.