Data Engineer

MJ Holding Company LLC, Bridgeview, IL

About The Position

The Data Engineer is a key member of the data team, responsible for designing, building, and maintaining reliable data pipelines, data models, and reporting solutions that power the organization’s data warehouse, analytics, and business intelligence needs. This role emphasizes hands-on development, collaboration with peers and business partners, and contributing to the organization’s growing use of data.

Requirements

  • Strong technical ability with SQL, scripting languages (e.g., Python), and cloud data tools.
  • Detail-oriented with good organizational skills.
  • Ability to communicate effectively with technical and non-technical stakeholders.
  • Problem-solving mindset with the ability to manage multiple tasks.
  • Team player with a proactive and flexible approach.
  • Comfort with version control (e.g., Git) and collaborative development practices.
  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
  • 2–4 years of experience in data engineering or a related role.
  • Hands-on experience building and maintaining data pipelines and working with modern data platforms (e.g., Snowflake, Azure SQL, BigQuery).
  • Proficiency in SQL and experience with performance tuning.
  • Exposure to data modeling and data warehousing concepts.
  • 2+ years hands-on experience developing dashboards and data models in BI tools (e.g., Power BI, Tableau), including DAX/MDX, calculated fields, and performance optimization.
  • Familiarity with ETL/ELT tools (SSIS, dbt, Airflow, or cloud-native equivalents).
  • Experience working directly with business stakeholders to translate requirements into BI solutions.
  • Strong understanding of dimensional modeling and how it supports BI/analytics.

Nice To Haves

  • Experience with cloud environments (AWS, Azure, or Google Cloud).
  • Knowledge of data governance, quality, and cataloging practices.
  • Industry experience in retail distribution, category management, or a similar domain is a plus.

Responsibilities

  • Develop, maintain, and optimize ELT/ETL pipelines using tools such as Apache Airflow, dbt, or cloud-native services (e.g., AWS Glue, Azure Data Factory).
  • Support the design and implementation of data warehouse and analytics solutions in platforms like Snowflake, BigQuery, or Databricks.
  • Assist with building and maintaining data models and structures (dimensional/star schemas, denormalized tables) to support reporting and analysis.
  • Help implement and follow data governance practices to ensure accuracy, consistency, and security.
  • Monitor data quality and apply validation, cleansing, and profiling techniques using tools like dbt tests, Great Expectations, or Python scripts.
  • Collaborate with business stakeholders to gather reporting and analytics requirements.
  • Translate business needs into semantic models, KPIs, and dashboards in tools like Power BI or Tableau.
  • Ensure data warehouse structures are optimized for self-service BI and ad hoc analysis.
  • Perform SQL development and optimization for queries, stored procedures, and transformations.
  • Troubleshoot and resolve data and pipeline issues.
  • Document data processes, flows, and business logic to ensure solutions are well understood and maintainable.
  • Work closely with analysts, developers, and business stakeholders to understand requirements and deliver solutions that meet their needs.
  • Stay current with data engineering practices and tools to continuously improve team efficiency.

Benefits

  • Competitive salary and benefits package
  • Opportunities for professional growth, career development, and ongoing training
  • Collaborative, innovative, and supportive work environment
  • Access to modern tools and technologies in the data space
  • Chance to work on impactful projects shaping the future of the organization