Data Analytics Engineer

J.McLaughlin
Hybrid

About The Position

J.McLaughlin is a specialty American sportswear and accessories brand headquartered in New York. J.McLaughlin has a reputation for being local and loyal, building meaningful relationships within each community and providing customers with highly personalized service. We are a growing company with a focus on our culture of kindness, cultivating an exceptional atmosphere in which to work and shop.

The Data Analytics Engineer reports to the IT Application Manager. This role builds and maintains the data infrastructure, transforming raw data into clean, reliable datasets for analysts and business users by designing data models, creating ETL/ELT pipelines, implementing testing, and applying software engineering best practices. Acting as a crucial bridge between data engineers and data analysts, the role enables data-driven decision making. Key responsibilities include data modeling, coding transformations (e.g., SQL and dbt), ensuring data quality through testing, documenting processes, and collaborating with stakeholders to meet business needs. This is a hybrid role primarily based in our Greenpoint, Brooklyn office.

Requirements

  • Bachelor's degree in IT Applications or a related field, or equivalent experience, required.
  • 1-2 years of analytics or BI experience, preferably in the retail industry.
  • 2-3 years of technical experience with SQL, Python, data warehouses/data lakes (Snowflake, BigQuery, Redshift, AWS), dbt, Airflow, and data modeling.
  • Strong problem solving, critical thinking, and data interpretation skills.
  • Excellent written and oral communication skills with emphasis on teamwork and attention to detail.
  • Ability to operate with objectivity, integrity, professionalism, and confidentiality.
  • Must be able to access and navigate each department at the organization's facilities.
  • Prolonged periods of sitting at a desk and working on a computer.

Responsibilities

  • Designing and implementing structured data models (e.g. star/snowflake schemas) in data warehouses.
  • Writing code (often SQL) to clean, aggregate, transform, and enrich raw data into analysis-ready formats.
  • Developing automated tests and monitoring solutions for data accuracy and reliability.
  • Building and maintaining data pipelines (ETL/ELT) to move and process data.
  • Creating clear documentation for data models, transformations, and processes.
  • Working with data engineers to understand infrastructure and with analysts/stakeholders to define requirements.
  • Applying version control (Git) and CI/CD to analytics code.
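The transformation and testing work described above can be illustrated with a small, self-contained sketch. All table and column names here are hypothetical, and SQLite stands in for the warehouse; in this role the same pattern would be expressed in dbt models and tests running against a platform such as Snowflake, BigQuery, or Redshift. The sketch loads raw order rows, transforms them into an analysis-ready daily sales table, and runs an automated data-quality check.

```python
import sqlite3

# Hypothetical raw data; in practice this would land in the warehouse
# via an ETL/ELT pipeline (e.g., orchestrated by Airflow).
raw_orders = [
    ("2024-01-01", "store-01", 120.00),
    ("2024-01-01", "store-02",  80.50),
    ("2024-01-02", "store-01",   None),   # bad row: missing amount
    ("2024-01-02", "store-02",  45.25),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_date TEXT, store_id TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: clean (drop NULL amounts) and aggregate into an
# analysis-ready daily sales table -- the kind of model dbt would manage.
conn.execute("""
    CREATE TABLE daily_sales AS
    SELECT order_date, store_id, SUM(amount) AS total_sales
    FROM raw_orders
    WHERE amount IS NOT NULL
    GROUP BY order_date, store_id
""")

# Automated data-quality test: no NULL or negative totals may survive,
# analogous to dbt's not_null test plus a custom range check.
bad = conn.execute(
    "SELECT COUNT(*) FROM daily_sales "
    "WHERE total_sales IS NULL OR total_sales < 0"
).fetchone()[0]
assert bad == 0, f"data-quality test failed: {bad} bad rows"

rows = conn.execute(
    "SELECT * FROM daily_sales ORDER BY order_date, store_id"
).fetchall()
```

Keeping transformations like this in version-controlled SQL, with tests that run in CI/CD, is exactly the software-engineering discipline the bullets above call for.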