About The Position

Kunai builds full-stack technology solutions for banks, credit and payment networks, infrastructure providers, and their customers. Together, we are changing the world's relationship with financial services. At Kunai, we help our clients modernize, capitalize on emerging trends, and evolve their business for the coming decades by remaining tech-agnostic and human-centered.

We are partnering with a financial services client to hire Data Engineers to join a team working on large-scale data solutions. This role involves working closely with project teams to manage data ingestion processes, prepare pre-stage files, and support multiple key project milestones; the key responsibilities are detailed below.

Our success over the past 20 years is rooted in our exceptional team, which thrives in a culture of collaboration, creativity, and continuous learning. We are proud to offer our employees a range of benefits, including competitive compensation, professional development opportunities, and flexible work arrangements, all designed to help them thrive. As we continue to expand, we remain committed to cultivating an environment where people feel valued, have a voice, and are given the tools to grow, both personally and professionally, while pushing the boundaries of innovation in the fintech industry.

Requirements

  • 3+ years of experience with data engineering principles and working experience with data-driven decision-making platforms
  • Experience in Python Development and PySpark
  • 2+ years of experience working in large data environments
  • Experience with data ingestion processes and ETL workflows
  • Experience with Snowflake or Databricks
  • Expertise with high-availability and distributed systems
  • Experience working with a variety of AWS services
  • Experience with serverless architectures and AWS Lambda
  • Familiarity with relational databases and query languages such as SQL
  • Strong attention to detail and ability to meet strict SLAs
  • Effective communication skills, with the ability to coordinate across teams

Nice To Haves

  • Experience with scripting languages (e.g., Python, shell scripting) to automate data ingestion tasks

Responsibilities

  • Prepare and manage pre-stage files for backbook conversion activities.
  • Support and execute data ingestion tasks in alignment with scheduled project events, including key mock events.
  • Monitor and ensure data ingestion completion within defined SLA windows.
  • Troubleshoot and resolve ingestion issues promptly to maintain data integrity and project timelines.
  • Maintain detailed documentation of ingestion processes, schedules, and completion statuses.
  • Apply a proactive problem-solving approach to day-to-day issues.

Benefits

  • Competitive compensation
  • Professional development opportunities
  • Flexible work arrangements