About The Position

Modash gives brands the tools to work with the right content creators and helps creators earn a living doing what they love. Behind the scenes, the Data Insights team builds the intelligence layer that turns raw social media signals into trusted, customer-facing data products, with reliable access, quality, and freshness at scale.

We're looking for a seasoned Senior Product Data Engineer to help us scale these systems end-to-end, raise our quality bar, and accelerate how quickly we turn messy public data into consistent, valuable insights customers can build on. The Data Insights team is a specialized team within the Data Org, and you'll own high-impact projects end-to-end, from idea to launch. You'll work on big, impactful projects like:

  • Building an understanding of creators' location, age, and interests at scale.
  • Creating systems to extract collaborations between creators and brands from raw social data.
  • Shaping the future of AI-assisted search, exploring how LLMs and embeddings can enhance search and recommendations.

You won't be patching pipelines; you'll be creating data products from scratch that directly impact customers. At Modash, the Data Insights team isn't a support function: it's a core part of the product.

You'll join a growing group of data and backend engineers working within our broader Data organization. We work in three closely aligned teams within Data:

  • Data Insights: builds the creator- and brand-level insight products and APIs (e.g., collaborations, reports, dictionaries, contacts, audience overlap).
  • Data Search: owns our search products (including AI Search) end-to-end.
  • Data Core: responsible for raw data collection and the foundations of our data platform.

We value autonomy, but we also work closely as a team through pair programming, fast feedback loops, and shared wins. Everyone is expected to take ownership, but nobody works in isolation.

We're remote-first, and we also make time to connect IRL through regular team offsites: to have fun, collaborate, and reflect.

Requirements

  • Strong knowledge of Apache Spark (Scala or PySpark, e.g., on Databricks; PySpark preferred but not required).
  • Proven track record with ETL/ELT pipelines and large-scale data processing.
  • Comfortable working with unstructured data.
  • Experience with workflow orchestration tools like Airflow or AWS Step Functions.
  • Familiarity with the AWS ecosystem (Glue, EMR, etc.).
  • Shipped full features from idea to production: planning & scoping → architecture → implementation → release → iteration.
  • Based in Europe with significant working-hours overlap with EET (Tallinn time).
  • Hands-on experience building agentic / LLM-powered features in production.
  • Practical understanding of trade-offs between LLMs (cost, latency, capability).

Nice To Haves

  • Worked with AI/ML tools or LLMs.
  • Familiar with the GCP stack (especially Vertex AI).
  • Worked with lakehouse formats like Apache Iceberg.
  • Used Pulumi or Terraform for IaC.
  • Familiar with Node.js and TypeScript.
  • Understand AWS cost mechanics (how scale impacts spend).
  • Care deeply about code quality and system design.
  • Curious about the creator economy.

Responsibilities

  • Own high-impact projects end-to-end, from idea to launch.
  • Create an understanding of creators' location, age, and interests at scale.
  • Develop systems to extract collaborations between creators and brands from raw social data.
  • Shape the future of AI-assisted search, exploring how LLMs and embeddings can enhance search and recommendations.
  • Create data products from scratch that directly impact customers.

Benefits

  • Fully remote — work from anywhere in Europe
  • Unlimited paid vacation
  • Flexible hours — async-friendly culture
  • High ownership, low bureaucracy, no bullshit
  • Personal development support — courses, books, or conferences on us
  • Regular offsites — connect with your team IRL