Adevinta · Posted 3 months ago
Mid Level
Remote • Amsterdam, Netherlands
5,001-10,000 employees
Web Search Portals, Libraries, Archives, and Other Information Services

Marktplaats in the Netherlands, and 2dehands and 2ememain in Belgium, are part of Adevinta, a global online classifieds specialist. The three brands are hosted on a multi-tenant platform operated from our Amsterdam location and are the top players in the classifieds space throughout the Benelux region. We offer consumers the opportunity to trade their unwanted products and contribute to a greener, circular economy. We offer businesses of all sizes - from the smallest hobbyist to the biggest brands in Benelux - a platform to showcase their goods and services online to over 11 million monthly unique users.

In the Marktplaats data and analytics teams, data is at the heart of everything we do. As a Data Engineer on the Data Platform team at Marktplaats, you will independently develop and deliver high-quality features for our new Data/ML Platform, refactor and translate our data products, and complete a variety of tasks to a high standard. You will be the cornerstone of the platform's reliability, scalability, and performance, working hands-on with batch and streaming data pipelines, storage solutions, and APIs that serve complex analytical and ML workloads. The role encompasses ownership of the self-serve data platform, including data collection, lake management, orchestration, processing, and distribution.

Responsibilities:
  • Develop and deliver high-quality features for the Data/ML Platform.
  • Refactor and translate data products.
  • Ensure the reliability, scalability, and performance of the platform.
  • Work hands-on with batch and streaming data pipelines, storage solutions, and APIs.
  • Own the self-serve data platform, including data collection, lake management, orchestration, processing, and distribution.
Requirements:
  • 8+ years of hands-on experience in software development/data engineering.
  • Proven experience building cloud-native, data-intensive applications (real-time and batch-based).
  • Strong data engineering background to support other data engineers, backend engineers, and data scientists.
  • Hands-on experience building and maintaining Spark applications.
  • Experience with AWS Cloud usage and data management.
  • Experience ensuring data quality, schema governance, and monitoring across pipelines.
  • Experience with orchestrators such as Airflow, Kubeflow, and Databricks Workflows.
  • Solid experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of file formats such as Parquet, and of Delta Lake and other open table formats (OTFs).
  • Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
  • Experience with Databricks (Lakehouse, Unity Catalog, MLflow, Mosaic AI, model serving, etc.).
  • Data validation and analysis skills, and proficiency in SQL.
  • Prior experience building and operating data platforms.
  • Experience with GCP is a plus.
What we offer:
  • An attractive base salary.
  • Participation in our Short-Term Incentive plan (annual bonus).
  • Work From Anywhere: Enjoy up to 20 days a year of working from anywhere.
  • A 24/7 Employee Assistance Program for you and your family.
  • A collaborative environment with opportunities to explore your potential and grow.