About The Position

Kunai builds full-stack technology solutions for banks, credit and payment networks, infrastructure providers, and their customers. Together, we are changing the world’s relationship with financial services. At Kunai, we help our clients modernize, capitalize on emerging trends, and evolve their business for the coming decades by remaining tech-agnostic and human-centered.

We're looking for a Senior Data Engineer to join our Data Infrastructure team — someone who doesn't just build pipelines, but shapes the foundation that every data-dependent team in the company relies on. This is a high-impact, high-autonomy role for an engineer who has seen what great data infrastructure looks like at scale and wants to build it. You will be a technical anchor for a strategic cloud migration from GCP to AWS, while simultaneously designing and building net-new pipelines and owning the reliability of what already exists. This isn't a role where you'll be handed a ticket queue. You'll help set the technical direction, make architecture decisions, and define the patterns that others will follow.

Requirements

  • Strong, hands-on Scala expertise with solid Python proficiency — you're comfortable switching between both and know when each is the right tool.
  • Deep experience with Apache Spark for both streaming and batch data processing at scale.
  • Proven track record running production ETL workloads on AWS (EMR, Glue) against terabytes of data.
  • Experience designing and operating data architectures using Delta Lake and the medallion (Bronze / Silver / Gold) pattern.
  • 8+ years of data engineering experience, with a track record of owning critical infrastructure end-to-end.

Nice To Haves

  • Familiarity with GCP data services and/or hands-on experience migrating data workloads from GCP to AWS.
  • Experience with tools and frameworks such as Apache Flink, Apache Beam, Airflow, or Databricks.

Responsibilities

  • Own the technical strategy and execution of migrating large-scale data workloads from GCP to AWS, ensuring continuity, data integrity, and minimal disruption.
  • Design migration playbooks and serve as the go-to expert for decisions across compute, storage, and orchestration layers during the transition.
  • Architect and implement scalable batch and streaming data pipelines using Apache Spark, Delta Lake, and the medallion architecture (a brief sketch of this pattern appears after this list).
  • Establish standards for pipeline design, data quality, and observability that the broader engineering organization can build on.
  • Take accountability for the reliability, performance, and cost-efficiency of production ETL jobs running on AWS (EMR, Glue) against terabyte-scale datasets.
  • Proactively identify and address bottlenecks, technical debt, and opportunities to improve throughput and resilience.
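
To make the medallion pattern concrete, below is a minimal Scala sketch of a batch job that lands raw data in Bronze, cleanses it into Silver, and aggregates it into Gold. It is only an illustration, assuming Delta Lake on Spark; the bucket paths, column names (transaction_id, amount, merchant_id), and the MedallionBatchSketch object are hypothetical placeholders, not a description of our actual stack.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Illustrative sketch only: paths and column names are placeholders.
    object MedallionBatchSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("medallion-batch-sketch")
          // Delta Lake integration (requires the delta-spark package on the classpath).
          .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
          .config("spark.sql.catalog.spark_catalog",
                  "org.apache.spark.sql.delta.catalog.DeltaCatalog")
          .getOrCreate()

        // Bronze: land raw source data as-is, tagged with an ingestion timestamp.
        val bronze = spark.read.json("s3://example-bucket/raw/transactions/")
          .withColumn("ingested_at", current_timestamp())
        bronze.write.format("delta").mode("append")
          .save("s3://example-bucket/bronze/transactions")

        // Silver: cleanse and deduplicate the Bronze records.
        val silver = spark.read.format("delta")
          .load("s3://example-bucket/bronze/transactions")
          .filter(col("amount").isNotNull)
          .dropDuplicates("transaction_id")
        silver.write.format("delta").mode("overwrite")
          .save("s3://example-bucket/silver/transactions")

        // Gold: business-level aggregates ready for analytics consumers.
        val gold = silver.groupBy("merchant_id")
          .agg(sum("amount").as("total_amount"), count(lit(1)).as("txn_count"))
        gold.write.format("delta").mode("overwrite")
          .save("s3://example-bucket/gold/merchant_totals")

        spark.stop()
      }
    }

In practice, a streaming variant of the same layering can write to the same Delta tables via Spark Structured Streaming, which is part of what this role would design and own.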

Benefits

  • Competitive compensation
  • Professional development opportunities
  • Flexible work arrangements