Omm IT Solutions · Posted 4 months ago
Full-time
MD
11-50 employees

The selected candidate will architect, design, develop, and implement a next-generation data streaming and event-driven platform using software engineering best practices and current technologies. This includes working with data streaming, event-driven architecture, and event processing frameworks, along with DevOps practices using tools such as Jenkins, Red Hat OpenShift, Docker, and SonarQube, and Infrastructure-as-Code and Configuration-as-Code methodologies using Ansible, Terraform, and scripting.

The candidate will administer Kafka on Linux, including automating, installing, migrating, upgrading, deploying, troubleshooting, and configuring the platform. They will provide expertise in Kafka administration, event-driven architecture, automation, application integration, monitoring and alerting, security, business process management, CI/CD pipelines, and data ingestion and data modeling.

The role also requires investigating and repairing issues to ensure business continuity across components such as the Kafka platform, business logic, middleware, networking, the CI/CD pipeline, and databases, and briefing management, customers, teams, or vendors with appropriate technical communication skills.
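As a rough illustration of the data-streaming concepts above, the sketch below shows Kafka-style key-based partitioning: events with the same key land on the same partition, which preserves per-key ordering. This is a toy model in plain Python; the `assign_partition` hash is a simple stand-in for illustration only, not Kafka's actual murmur2 partitioner, and the event names are hypothetical.

```python
from collections import defaultdict

def assign_partition(key: str, num_partitions: int) -> int:
    """Toy stand-in for a Kafka partitioner: same key -> same partition.
    (Kafka's default partitioner uses murmur2; a stable polynomial
    hash is enough to show the idea.)"""
    h = sum(ord(c) * 31 ** i for i, c in enumerate(key))
    return h % num_partitions

def route_events(events, num_partitions=3):
    """Group a stream of (key, value) events into per-partition logs,
    preserving arrival order within each partition."""
    partitions = defaultdict(list)
    for key, value in events:
        partitions[assign_partition(key, num_partitions)].append((key, value))
    return dict(partitions)

# Hypothetical event stream: all "user-1" events share one partition,
# so their relative order (login, click, logout) is preserved.
events = [("user-1", "login"), ("user-2", "login"),
          ("user-1", "click"), ("user-1", "logout")]
logs = route_events(events)
```

Choosing the partition key (and partition count) this way is what the "partition strategy" qualification below refers to: it trades parallelism against ordering guarantees per key.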

Responsibilities:

  • Architect, design, develop, and implement data streaming and event-based architecture/platform.
  • Administer Kafka including automating, installing, migrating, upgrading, deploying, troubleshooting, and configuring on Linux.
  • Provide expertise in Kafka administration, event-driven architecture, automation, application integration, monitoring and alerting, security, and CI/CD pipeline.
  • Investigate, repair, and ensure business continuity across impacted components.
  • Brief management, customers, teams, or vendors using written or oral communication skills.
  • Perform all other duties as assigned or directed.

Qualifications:

  • Bachelor's Degree in Computer Science, Mathematics, Engineering, or a related field.
  • A Master's or Doctorate degree may substitute for required experience.
  • 8+ years of combined experience with Site Reliability Engineering, providing DevOps support, and/or RHEL administration for mission-critical platforms, ideally Kafka.
  • 4+ years of combined experience with Kafka (Confluent Kafka, Apache Kafka, Amazon MSK).
  • 4+ years of experience with Ansible automation.
  • Must be able to obtain and maintain a Public Trust.
  • Strong experience with Ansible Automation and authoring playbooks and roles.
  • Solid experience using version control software such as Git/Bitbucket.
  • Hands-on experience administering the Kafka platform via Ansible playbooks or other automation.
  • Understanding of Kafka architecture, including partition strategy, replication, transactions, and disaster recovery strategies.
  • Strong experience in automating tasks with scripting languages like Bash, Shell, or Python.
  • Solid foundation of Red Hat Enterprise Linux (RHEL) administration.
  • Basic networking skills.
  • Solid experience triaging and monitoring complex issues, outages, and incidents.
  • Experience integrating and maintaining third-party tools such as ZooKeeper, Flink, Pinot, Prometheus, and Grafana.
  • Experience with Platform-as-a-Service (PaaS) using Red Hat OpenShift/Kubernetes and Docker containers.
  • Experience working on Agile projects and understanding Agile terminology.
  • Confluent Certified Administrator for Apache Kafka (CCAAK) or Confluent Certified Developer for Apache Kafka (CCDAK).
  • Practical experience with event-driven applications and at least one event processing framework, such as Kafka Streams, Apache Flink, or ksqlDB.
  • Understanding of Domain Driven Design (DDD) and experience applying DDD patterns in software development.
  • Experience working with Kafka connectors and/or supporting operation of the Kafka Connect API.
  • Experience with Avro / JSON data serialization and schema governance with Confluent Schema Registry.
  • Preferred: experience with AWS cloud technologies or other cloud providers; AWS cloud certifications are a plus.
  • Experience with Infrastructure-as-Code (CloudFormation / Terraform, Scripting).
  • Solid knowledge of relational databases (PostgreSQL, DB2, or Oracle), NoSQL databases (MongoDB, Cassandra, DynamoDB), SQL, and/or ORM technologies (JPA2, Hibernate, or Spring JPA).
  • Knowledge of Social Security Administration (SSA).
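For context on the event processing frameworks named above (Kafka Streams, Apache Flink, ksqlDB), the sketch below illustrates the core pattern they share: consume a stream of events, maintain keyed state, and emit updated aggregates (a changelog). This is a minimal plain-Python illustration of the pattern, not any framework's actual API, and the click-stream data is hypothetical.

```python
from collections import Counter

def process_stream(events):
    """Stateful stream aggregation in the spirit of Kafka Streams /
    Flink: for each incoming event, update a keyed state store and
    emit the new aggregate, like a KTable changelog."""
    state = Counter()   # materialized state store: per-event-type counts
    changelog = []      # emitted updates: (key, new_count) per event
    for event_type in events:
        state[event_type] += 1
        changelog.append((event_type, state[event_type]))
    return state, changelog

# Hypothetical click-stream: three "view" events interleaved with others.
clicks = ["view", "click", "view", "purchase", "view"]
state, changelog = process_stream(clicks)
```

Real frameworks add what this toy omits: partitioned parallelism, fault-tolerant state backed by a changelog topic, and windowing, which is why the role pairs event processing experience with Kafka operations skills.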
© 2024 Teal Labs, Inc