Data Platform Engineer

Paymentology
Remote

About The Position

At Paymentology, we're redefining what's possible in the payments space. As the first truly global issuer-processor, we give banks and fintechs the technology and talent to launch and manage Mastercard and Visa cards at scale across more than 60 countries. Our advanced, multi-cloud platform delivers real-time data, unmatched scalability, and the flexibility of shared or dedicated processing instances. It's this global reach and innovation that set us apart.

We're looking for a Data Platform Engineer to join our Data Engineering team and help build a modern data platform from the ground up. This is a greenfield opportunity focused on designing and implementing scalable data infrastructure, engineering robust data pipelines, and establishing observability: a critical role in enabling reliable, high-performance, and secure data systems. You'll work closely with data engineers, analysts, and senior technical stakeholders to design, implement, and operate the foundations of our data stack, from cloud infrastructure and data pipelines to storage and processing layers.

This role is ideal for an experienced engineer with strong data platform and cloud infrastructure expertise who thrives in a high-impact, global fintech environment.

Requirements

  • 3-5 years of hands-on experience in Data Engineering, Platform Engineering, or DataOps roles.
  • Proven track record of designing and implementing reliable, scalable data platforms and data infrastructure, owning end-to-end delivery rather than just supporting it.
  • Hands-on experience with modern data engineering tools such as dbt, Apache Airflow, or Apache Kafka is required.
  • Hands-on proficiency with Infrastructure as Code (Terraform) and cloud architecture patterns on AWS or GCP.
  • Deep experience with AWS or GCP, including data storage and processing services (e.g., BigQuery, Snowflake, S3, Redshift).
  • Practical experience with Kubernetes and containerised workloads for orchestrating data platform services.
  • Experience implementing observability stacks for data platform monitoring, logging, metrics, and alerting.
  • Strong programming skills in Python, SQL, and Bash to build data pipelines, automate workflows, and perform data processing.
  • Excellent problem-solving skills and the ability to work effectively in a collaborative, fully remote environment.
  • A strong inclination to deepen expertise in data architecture, data modelling, and MLOps capabilities.
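The pipeline-building and automation skills listed above can be sketched concretely. The following is a minimal, illustrative example (not a Paymentology API; all function and field names are hypothetical) of the kind of Python workflow automation the role describes: a pipeline step runner with retries and structured timing logs, using only the standard library.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

def run_step(name, fn, retries=3, backoff_s=0.1):
    """Run one pipeline step with retries, logging duration and status.

    Hypothetical helper for illustration only.
    """
    for attempt in range(1, retries + 1):
        start = time.monotonic()
        try:
            result = fn()
            log.info("step=%s attempt=%d duration_ms=%.1f status=ok",
                     name, attempt, (time.monotonic() - start) * 1000)
            return result
        except Exception:
            log.warning("step=%s attempt=%d status=failed", name, attempt)
            if attempt == retries:
                raise
            time.sleep(backoff_s * attempt)  # linear backoff between retries

def transform(rows):
    # Example transform: keep settled transactions, normalise amounts to cents.
    return [{"id": r["id"], "amount_cents": round(r["amount"] * 100)}
            for r in rows if r["status"] == "settled"]

rows = [{"id": 1, "amount": 9.99, "status": "settled"},
        {"id": 2, "amount": 4.50, "status": "pending"}]
print(run_step("transform", lambda: transform(rows)))  # → [{'id': 1, 'amount_cents': 999}]
```

In a production platform the same pattern would typically live inside an orchestrator such as Airflow, with the logs shipped to a central observability stack.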

Nice To Haves

  • Experience with real-time data processing (e.g., Kafka, Spark Streaming) and with both SQL and NoSQL data storage solutions.
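To make the real-time processing requirement concrete, here is a toy sliding-window aggregation in pure Python. It is a stand-in for what Kafka Streams or Spark Streaming would do at scale (the class and its parameters are invented for illustration).

```python
from collections import deque

class SlidingWindowCounter:
    """Counts events seen in the last `window_s` seconds.

    Illustrative only: real streaming engines handle partitioning,
    out-of-order events, and state persistence as well.
    """
    def __init__(self, window_s):
        self.window_s = window_s
        self.events = deque()  # timestamps, oldest first

    def add(self, ts):
        self.events.append(ts)
        self._evict(ts)

    def count(self, now):
        self._evict(now)
        return len(self.events)

    def _evict(self, now):
        # Drop timestamps that have fallen out of the window.
        while self.events and self.events[0] <= now - self.window_s:
            self.events.popleft()

w = SlidingWindowCounter(window_s=60)
for ts in [0, 10, 30, 65, 70]:
    w.add(ts)
print(w.count(now=70))  # → 3 (events at 30, 65, 70 fall inside the 60s window)
```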

Responsibilities

  • Design and implement cloud-based data platform infrastructure using Infrastructure as Code (Terraform), with a strong focus on scalability, security, reliability, and cost-efficiency.
  • Build and maintain CI/CD pipelines that automate data engineering workflows, data pipeline deployments, and infrastructure provisioning, ensuring faster deployment cycles and minimizing errors.
  • Implement and operate observability solutions — integrating monitoring, logging, and metrics to ensure platform reliability, performance visibility, and fast incident response.
  • Collaborate closely with data engineers and cross-functional teams to design and implement data pipelines, data models, and platform capabilities that meet performance and business requirements.
  • Apply best practices for high availability, disaster recovery, security, and cost optimization, while documenting infrastructure patterns, data architecture decisions, and operational procedures.
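The observability responsibility above can be sketched in miniature. In production this would be Prometheus, CloudWatch, or similar; here a stdlib `Counter` plus structured JSON logs stands in, and the event names and alert rule are hypothetical.

```python
import json
import logging
import time
from collections import Counter

metrics = Counter()  # toy in-process metrics store
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("observability")

def record(event, **fields):
    """Increment a counter and emit a structured log line for the event."""
    metrics[event] += 1
    log.info(json.dumps({"event": event, "ts": time.time(), **fields}))

def check_alerts(error_threshold=1):
    # Simple alert rule: flag the pipeline once errors reach a threshold.
    return metrics["pipeline.error"] >= error_threshold

record("pipeline.run", pipeline="daily_settlements")
record("pipeline.error", pipeline="daily_settlements", reason="timeout")
print(check_alerts())  # → True once the error count reaches the threshold
```

The design choice being illustrated: counters answer "how often", structured logs answer "what happened", and an alert rule over the counters closes the loop to fast incident response.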