About The Position

In this role, you will contribute to the development and refinement of our data processing pipeline while shaping new analytical methodologies. You'll grow from owning specific steps of the pipeline to becoming an expert across data cleaning, preparation, and rule validation, working fluently with Python and large datasets. You will also design new methods, translate them into clear specifications, and collaborate with engineering teams to ensure accurate and scalable implementation. Close coordination with stakeholders helps you understand their needs, gather requirements, and secure alignment. This position is ideal for someone who enjoys hands-on coding, improving data processes, and driving methodological innovation within a collaborative, cross-functional environment.

Requirements

  • Good understanding of consumer behavior, panel-based projections, and consumer metrics and analytics
  • Proven experience designing and developing software that applies statistical and data-analytical methods, with a demonstrated ability to handle complex data sets
  • Master's or doctoral degree in Data Science, Mathematics, or Statistics, or a BE/BTech in Computer Science, Data Science, or a related field involving statistical analysis of large data sets
  • A holistic understanding of processes: you enjoy connecting methodology, engineering, and business context to build solutions that work end-to-end, spanning data ingestion, modeling, tooling, deployment, and impact evaluation
  • Proficiency in manipulating, analyzing, and interpreting large data sets, and experience presenting findings and recommendations
  • Proficiency in Python; an ideal candidate has already implemented statistical methods such as outlier validation. Experience writing efficient code to process large amounts of data, ideally using PySpark. Experience with SQL and writing queries is a plus
  • Strong communication, writing, and collaboration skills, with experience in or interest in supporting cross-functional stakeholders through production deployment
  • Eagerness to adopt and develop evolving technologies and tools.

Nice To Haves

  • Experience with managed or unmanaged crowdsourced panels and receipt-capture methodologies
  • Strong statistical and logical skills, with experience in data cleaning, outlier validation, sampling, bias reduction, indirect estimation, and data aggregation techniques.
  • Knowledge of software engineering, including experience designing and developing software
  • Familiarity with cloud computing technology stacks (Azure AI, Databricks, Snowflake) and experience with version control systems such as GitHub or Bitbucket

Responsibilities

  • Contribute to the development and refinement of our data processing pipeline
  • Shape new analytical methodologies
  • Become an expert across data cleaning, preparation, and rule validation
  • Work fluently with Python and large datasets
  • Design new methods and translate them into clear specifications
  • Collaborate with engineering teams to ensure accurate and scalable implementation
  • Coordinate with stakeholders to understand their needs, gather requirements, and secure alignment

Benefits

  • Flexible working environment
  • Volunteer time off
  • LinkedIn Learning
  • Employee-Assistance-Program (EAP)
© 2024 Teal Labs, Inc