Adyen • Posted 3 months ago
Full-time • Mid Level
Chicago, IL
1,001-5,000 employees
Credit Intermediation and Related Activities

Adyen is looking for a Data Engineer II to join the Bank Reporting team in Chicago. Merchants use the products built by the Reporting teams to understand their usage of the Adyen platform in detail and perform financial reconciliation. This is a key workflow for our merchants' continued use of our financial products. As a result, this role requires a solid understanding of both the business context and its data needs, with a focus on building high-quality data pipelines on our Big Data Platform. As a Data Engineer at Adyen, you will be instrumental in maintaining our data infrastructure and ensuring smooth data flows across systems, as well as understanding how the Adyen platform works at a deep level and translating that complexity into simplicity for our merchants.

  • Engage with a diverse range of stakeholders, including data scientists, analysts, software engineers, product managers, and customers, to understand their requirements and craft effective solutions.
  • Design, develop, deploy and operate high-quality production ELT pipelines and data architectures. Integrate data from various sources and formats, ensuring compatibility, consistency, and reliability.
  • Help establish and share best practices in performance, code quality, data validation, data governance, and discoverability in your team and in other teams. Participate in mentoring and knowledge sharing initiatives.
  • Ensure data is accurate, complete, reliable, relevant, and timely. Implement testing, monitoring, and validation protocols for your code and data, leveraging tools such as pytest.
  • Identify and resolve performance bottlenecks in data pipelines and systems. Improve query performance and resource utilization to meet SLAs and performance requirements, using techniques such as Spark optimizations.
  • Around 3 years of experience working as a Data Engineer or in a similar role.
  • Proficient in tools and frameworks such as Python, PySpark, Airflow, Hadoop, Spark, Kafka, Druid, and SQL, or similar data technologies.
  • Able to effectively communicate complex data-related concepts and outcomes to a diverse range of stakeholders.
  • Capable of identifying opportunities, devising solutions, and handling projects independently.
  • You have an experimental mindset with a 'launch fast and iterate' mentality.
  • Skilled in promoting a data-centric culture within technical teams and advocating for setting standards and continuous improvement.
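To illustrate the kind of data validation the role describes, here is a minimal sketch of row-level quality checks for a settlement feed. The schema (`merchant_id`, `gross_amount`, `currency`) is invented for illustration and does not reflect Adyen's actual data model; in practice such checks would typically run as pytest test cases against real pipeline output.

```python
# Hypothetical required schema for one settlement record; the column names
# are illustrative, not Adyen's real reporting schema.
REQUIRED_COLUMNS = {"merchant_id", "gross_amount", "currency"}


def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one settlement record.

    An empty list means the record passed every check.
    """
    errors = []

    # Completeness: every field downstream reconciliation depends on must be present.
    missing = REQUIRED_COLUMNS - row.keys()
    if missing:
        errors.append(f"missing columns: {sorted(missing)}")

    # Accuracy: settlement amounts should be positive in this sketch;
    # refunds would arrive in a separate feed.
    if "gross_amount" in row and row["gross_amount"] <= 0:
        errors.append("gross_amount must be positive")

    # Consistency: currencies are expected as 3-letter ISO 4217 codes.
    if "currency" in row and len(row["currency"]) != 3:
        errors.append("currency must be a 3-letter ISO code")

    return errors
```

A valid record returns `[]`, while a record with a negative amount or missing columns returns a human-readable list of failures that monitoring can surface.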