Senior Python Engineer

FXC Intelligence
Remote

About The Position

We are looking for a Senior Python Engineer to join our Data Ingest team, which is responsible for scraping the data that fuels our products. You'll play a key role in shaping how we capture, process and scale the data pipelines that underpin the insights we deliver to some of the world's biggest companies, central banks and non-governmental organisations.

You will join a technical team of c. 40 technologists whose key purpose is to continuously and reliably collect large amounts of financial data describing cross-border payments, then process it, check its quality and serve it to our users so that they can make the best business decisions.

The Data Ingest team is the oldest team at FXC Intelligence, having existed in some form for over seven years. We provide the platform for collecting unstructured data from open third-party websites and mobile apps, and we make sure that the data collected meets the company's data quality standards and SLAs. As the business grows and requirements change, so do we: the platform has survived several iterations and is currently being migrated to its fourth generation. The team also maintains over 200 scrapers for different data providers, which power most of the company's critical datasets.

Requirements

  • Advanced Python and SQL skills for engineering robust solutions to complex data and backend challenges
  • Experience with data processing and distributed high-load systems
  • Experience with event-driven architecture and message queues (RabbitMQ/Kafka)
  • Ability to perform basic exploratory data analysis and make technical decisions based on observed data rather than assumptions
  • Proactive ownership of projects, with a high bar for technical excellence
  • Ability to thrive in fast-paced environments, pivoting quickly to meet changing priorities without losing momentum
  • Ability to articulate complex ideas clearly to both technical and non-technical stakeholders
  • Willingness to deep-dive into the business domain so that your technical output aligns with product goals and user needs
  • A positive attitude and a proactive drive to learn new technologies and methodologies

Nice To Haves

  • Prior experience at a data product company or working with Big Data
  • Experience building and maintaining scalable web scraping architectures and backend systems that power core product features
  • Experience documenting system architecture
  • Experience collaborating with BI, DA and/or ML teams

Responsibilities

  • Carrying out research and technical investigations
  • Implementing new scrapers and maintaining existing ones
  • Improving the scraping platform in the areas of data quality, performance and observability
  • Taking ownership of architectural or conceptual components of the platform (e.g. reliability, QA processes or AI scrapers)
  • Gaining sufficient understanding of the codebase to make contributions that reduce overall complexity and remove technical debt
  • Performing code reviews and participating in intra- and cross-team technical initiatives and discussions