Design, build, and manage scalable data pipelines and enterprise integration workflows using tools like Kafka and Python-based frameworks. Build highly efficient and scalable ETL/ELT processes that support both incremental and full data loads, using frameworks like DBT and orchestrated through Apache Airflow.

Telecommuting permitted: work may be performed within normal commuting distance from the Red Hat, Inc. office in Boston, MA.

What You Will Do:
- Develop and maintain Python-based applications for data processing and transformation using libraries such as Pandas and NumPy.
- Architect and deploy cloud-native data solutions on public cloud platforms, ensuring scalability, security, and performance.
- Build and manage containerized applications using Red Hat OpenShift, enabling reliable deployment and orchestration across environments.
- Write and optimize complex SQL queries (CRUD and analytics) for structured and unstructured data across various databases.
- Integrate machine learning models and advanced analytics into production-grade data pipelines.
- Implement and maintain CI/CD pipelines based on GitOps methodology, with a focus on automation, testing, and streamlined deployment cycles.
- Troubleshoot complex system and data issues, minimizing downtime and ensuring data quality and reliability.
- Follow Agile methodologies and DevOps practices, actively contributing to sprint planning, retrospectives, and delivery cycles.
- Mentor junior engineers, conduct code reviews, and promote knowledge sharing across the team.