Data Platform Engineer

Mizuho · New York, NY
Posted 11 days ago · $111,000 - $150,000 · Hybrid

About The Position

Join Mizuho as a Data Platform Engineer! We are seeking a highly skilled Kafka Platform Engineer to design, build, and operate our enterprise event-streaming platform using Red Hat AMQ Streams (Kafka on OpenShift). In this role, you will be responsible for ensuring a reliable, scalable, secure, and developer-friendly streaming ecosystem. You will work closely with application teams to define and implement event-driven integration patterns, and you will leverage GitLab and Argo CD to automate platform delivery and configuration. This position requires a strong blend of platform engineering, DevOps practices, Kafka cluster expertise, and architectural understanding of integration/streaming patterns.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience with Kafka administration and management.
  • Strong knowledge of OpenShift and container orchestration.
  • Proficiency in scripting languages such as Python or Bash (an example script sketch follows this list).
  • Experience with monitoring and logging tools (e.g., Splunk, Prometheus, Grafana).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.
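
By way of illustration, a day-to-day scripting task for this role might resemble the minimal Python sketch below, which uses the confluent-kafka client to report broker and topic health. The bootstrap address and timeout are placeholder assumptions, not Mizuho configuration.

    # Minimal cluster health-check sketch (confluent-kafka client;
    # the bootstrap address below is a placeholder, not a real endpoint).
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # hypothetical address

    # list_topics() returns cluster metadata: brokers, topics, partitions.
    md = admin.list_topics(timeout=10)

    print(f"Brokers online: {len(md.brokers)}")
    for name in sorted(md.topics):
        topic = md.topics[name]
        # Flag topics with partition-level errors (e.g., no leader elected).
        errors = [p for p in topic.partitions.values() if p.error is not None]
        status = "DEGRADED" if errors else "OK"
        print(f"{name}: {len(topic.partitions)} partitions [{status}]")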

Nice To Haves

  • Experience with Red Hat OpenShift administration.
  • Knowledge of service mesh patterns (Istio, OpenShift Service Mesh).
  • Familiarity with stream processing frameworks (Kafka Streams, ksqlDB, Flink).
  • Experience using observability stacks (Prometheus, Grafana).
  • Background working in regulated or enterprise-scale environments.
  • Knowledge of DevOps practices and tools (e.g., ArgoCD, Ansible, Terraform).
  • Knowledge of SRE practices and monitoring/logging tools (e.g., Splunk, Prometheus, Grafana).

Responsibilities

  • Design, deploy, and operate AMQ Streams (Kafka) clusters on Red Hat OpenShift.
  • Configure and manage Kafka components, including brokers, KRaft controllers, MirrorMaker 2, Kafka Connect, and Schema Registry.
  • Ensure performance, reliability, scalability, and high availability of the Kafka platform.
  • Implement cluster monitoring, logging, and alerting using enterprise observability tools.
  • Manage capacity planning, partition strategies, retention policies, and performance tuning (a topic-provisioning sketch follows this list).
  • Define and document standardized event-driven integration patterns, including (see the enrichment sketch after this list):
      • Event sourcing
      • CQRS
      • Pub/sub messaging
      • Change data capture
      • Stream processing & enrichment
      • Request-reply over Kafka
  • Guide application teams on using appropriate patterns that align with enterprise architecture.
  • Establish best practices for schema design, topic governance, data contracts, and message lifecycle management.
  • Implement enterprise-grade security for Kafka, including RBAC, TLS, ACLs, and authentication/authorization integration (SSO and OAuth); a client security configuration sketch follows this list.
  • Maintain governance for topic creation, schema evolution, retention policies, and naming standards.
  • Ensure adherence to compliance, auditing, and data protection requirements (encryption at rest and in transit).
  • Provide platform guidance and troubleshooting expertise to development and integration teams.
  • Partner with architects, SREs, and developers to drive adoption of event-driven architectures.
  • Create documentation, runbooks, and internal knowledge-sharing materials.
  • Build and maintain GitOps workflows using Argo CD for declarative deployment of Kafka resources and platform configurations.
  • Develop CI/CD pipelines in GitLab, enabling automated builds, infrastructure updates, and configuration promotion across environments.
  • Maintain Infrastructure-as-Code (IaC) repositories and templates for Kafka resources (a manifest-rendering sketch follows below).
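
As a concrete illustration of the topic-governance, retention, and partition-strategy responsibilities above, the following Python sketch provisions a topic through the confluent-kafka AdminClient. The naming convention, partition count, and retention values are illustrative assumptions, not Mizuho standards.

    # Topic-provisioning sketch: enforces a (hypothetical) naming standard
    # and applies retention/partition settings at creation time.
    import re
    from confluent_kafka.admin import AdminClient, NewTopic

    # Assumed convention: "<domain>.<event-stream>.v<N>", e.g. "payments.trade-events.v1".
    NAME_PATTERN = re.compile(r"^[a-z]+\.[a-z-]+\.v\d+$")

    def create_governed_topic(admin: AdminClient, name: str) -> None:
        if not NAME_PATTERN.match(name):
            raise ValueError(f"Topic name {name!r} violates naming standard")
        topic = NewTopic(
            name,
            num_partitions=12,     # illustrative partition strategy
            replication_factor=3,  # survive single-broker loss
            config={
                "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # 7-day retention
                "cleanup.policy": "delete",
            },
        )
        # create_topics() returns {topic_name: future}; result() raises on failure.
        futures = admin.create_topics([topic])
        futures[name].result()

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder
    create_governed_topic(admin, "payments.trade-events.v1")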
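
For the pub/sub and stream processing & enrichment patterns listed above, a minimal enrichment pipeline in plain Python (consume, transform, re-publish) might look like the sketch below. Topic names, the consumer group id, and the enrichment logic are hypothetical.

    # Enrichment sketch: consume raw events, add a derived field, publish
    # to an enriched topic. Topic names and group id are hypothetical.
    import json
    from confluent_kafka import Consumer, Producer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder
        "group.id": "enrichment-demo",
        "auto.offset.reset": "earliest",
    })
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    consumer.subscribe(["orders.raw.v1"])
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            event = json.loads(msg.value())
            event["amount_usd"] = event["qty"] * event["price"]  # illustrative enrichment
            producer.produce("orders.enriched.v1", json.dumps(event).encode(), key=msg.key())
            producer.poll(0)  # serve delivery callbacks
    finally:
        consumer.close()
        producer.flush()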
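
The security responsibilities (TLS, ACLs, SSO/OAuth integration) surface on the client side as configuration. A sketch of an OAuth-over-TLS client configuration using standard librdkafka property names follows; every endpoint, path, and credential shown is a placeholder.

    # Client-side security sketch (librdkafka-style configuration keys);
    # all endpoints, paths, and credentials below are placeholders.
    from confluent_kafka import Producer

    secure_conf = {
        "bootstrap.servers": "kafka.example.internal:9093",  # placeholder
        "security.protocol": "SASL_SSL",                     # TLS + SASL auth
        "ssl.ca.location": "/etc/pki/ca-bundle.crt",         # trust store (assumed path)
        "sasl.mechanisms": "OAUTHBEARER",                    # OAuth-based authentication
        "sasl.oauthbearer.method": "oidc",                   # client-credentials flow
        "sasl.oauthbearer.client.id": "streaming-app",       # placeholder client id
        "sasl.oauthbearer.client.secret": "REDACTED",        # inject from a secret store
        "sasl.oauthbearer.token.endpoint.url": "https://sso.example.internal/token",
    }
    producer = Producer(secure_conf)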
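
Finally, the GitOps responsibilities above typically mean keeping declarative KafkaTopic manifests in the repository Argo CD syncs from. The helper below, which renders such a manifest (Strimzi/AMQ Streams CRD) with PyYAML, is a hypothetical sketch: the namespace, cluster label, and values are assumptions.

    # Sketch: render a declarative KafkaTopic manifest (Strimzi/AMQ Streams
    # CRD) for commit to a GitOps repo that Argo CD syncs. All names,
    # namespaces, and labels are hypothetical.
    import yaml

    def render_kafka_topic(name: str, partitions: int, retention_ms: int) -> str:
        manifest = {
            "apiVersion": "kafka.strimzi.io/v1beta2",
            "kind": "KafkaTopic",
            "metadata": {
                "name": name,
                "namespace": "kafka",                              # assumed namespace
                "labels": {"strimzi.io/cluster": "prod-cluster"},  # assumed cluster name
            },
            "spec": {
                "partitions": partitions,
                "replicas": 3,
                "config": {"retention.ms": retention_ms},
            },
        }
        return yaml.safe_dump(manifest, sort_keys=False)

    print(render_kafka_topic("orders.enriched.v1", 12, 604800000))  # 7 days in ms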