Data Engineer II-Promo Analytics

Milwaukee Tool · Menomonee Falls, WI

About The Position

Applicants must be authorized to work in the U.S.; sponsorship is not available for this position.

INNOVATE without boundaries! At Milwaukee Tool we firmly believe that our People and our Culture are the secrets to our success, so we give you unlimited access to everything you need to support your business unit. Behind our doors you'll be empowered every day to own it, drive it, and do what it takes to support the biggest breakthroughs in the industry. Meanwhile, you'll have the support and resources of the fastest-growing brand in the construction industry to make it happen.

Your Role on Our Team:

As a Data Engineer II, you will play a critical role in enabling fast, accurate, and scalable data-driven decisions at Milwaukee Tool. You will help build and evolve the pipelines, models, and governance frameworks that power analytics for retail promotions and enterprise-wide initiatives. Partnering with business teams and Data Platform engineers, you will turn requirements into high-quality data products using Databricks and modern cloud technologies. Your work ensures that teams have timely, reliable, and well-structured data to support operational reporting, strategic insights, and advanced analytics.

You'll be DISRUPTIVE through these duties and responsibilities:

Requirements

  • Bachelor’s degree in Computer Science, Information Systems, or equivalent experience.
  • 3–5 years of experience in data engineering or a related technical field.
  • Strong proficiency in SQL and a programming language such as Python (preferred).
  • Experience building and orchestrating data workflows in Databricks, including Delta Lake, notebooks, jobs, and workflows.
  • Hands‑on experience with distributed data processing technologies such as Apache Spark.
  • Experience with cloud data ecosystems (Azure, AWS, or GCP), especially Azure Databricks.
  • Familiarity with cloud data warehouses such as Snowflake, Synapse, Redshift, or BigQuery.
  • Experience working with structured and semi‑structured data (Parquet, Avro, JSON, Delta).
  • Strong understanding of version control (Git) and modern CI/CD workflows.
  • Strong problem‑solving, debugging, and analytical skills.
  • Ability to work effectively in agile, cross‑functional engineering teams.

Nice To Haves

  • Experience with Databricks Unity Catalog, Delta Live Tables, or Databricks Workflows.
  • DataOps experience (pipeline observability, monitoring, automated quality).
  • Knowledge of metadata management or cataloging platforms (Purview, Collibra, Alation).
  • Experience with ML pipelines and feature engineering in Databricks.
  • Familiarity with streaming frameworks (Kafka, Event Hubs, Kinesis) used with Spark Structured Streaming.
  • Knowledge and experience working in an Agile environment.
  • Experience working with retail product promotion data.

Responsibilities

  • Design and build scalable data pipelines to ingest, transform, and curate data from a variety of systems including APIs, databases, files, and event streams.
  • Review functional requirements and design specs with Senior Data Engineers and business partners, and convert them into data transformations.
  • Implement and maintain data models such as dimensional models, star schemas, normalized models, and data vault approaches to support analytics and BI.
  • Work with Data Architects and Data Leads to optimize cloud‑based data platforms, ensuring performance, reliability, and cost‑efficient execution of data workloads.
  • Develop and enforce data quality checks, lineage, and monitoring to ensure accuracy, completeness, and trust in enterprise datasets.
  • Leverage your expertise within the software development lifecycle, continuous improvement, and best practices to help drive the team towards rapid success.
  • Automate and operationalize data pipelines using CI/CD, Infrastructure‑as‑Code, and modern orchestration tools.
  • Profile, tune, and optimize SQL, Python, and Spark workloads running in Databricks.
  • Author technical documentation, promote reusable components, and contribute to engineering standards and best practices.
  • Troubleshoot pipeline issues, participate in root‑cause analysis, and help maintain healthy, reliable data operations.
  • Perform other duties as assigned.

Benefits

  • Robust health, dental, and vision insurance plans
  • Generous 401(k) savings plan
  • Education assistance
  • On-site wellness, fitness center, food, and coffee service