Data Engineer III - 328284

Grainger Businesses • Lake Forest, IL
Posted 5 days ago • Remote

About The Position

W.W. Grainger, Inc., is a leading broad line distributor with operations primarily in North America, Japan, and the United Kingdom. At Grainger, We Keep the World Working® by serving more than 4.5 million customers worldwide with products and solutions delivered through innovative technology and deep customer relationships. Known for its commitment to service and award-winning culture, the Company had 2024 revenue of $17.2 billion across its two business models. In the High-Touch Solutions segment, Grainger offers approximately 2 million maintenance, repair, and operating (MRO) products and services, including technical support and inventory management. In the Endless Assortment segment, Zoro.com offers customers access to more than 14 million products, and MonotaRO.com offers more than 24 million products. For more information, visit www.grainger.com.

Grainger Corporate Services LLC is seeking a Data Engineer III in Lake Forest, IL. 100% remote work is allowed from anywhere in the U.S.

Requirements

  • Bachelor’s degree in Information Technology or a related field, plus 3 years of related experience. Prior experience must include:
  • Creating and managing scalable ETL pipelines for analytics dashboards, visualization, machine learning, abstractions, and infrastructure.
  • Implementing AWS services.
  • Creating and maintaining optimal data pipeline architecture.
  • Assembling large, complex data sets that meet functional and non-functional requirements.
  • Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Kafka and Spark big data technologies.
  • Creating data tools that help analytics and data science teams build and optimize the product.
  • Up to 25% domestic travel required for conferences and team meetings.
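The ETL responsibilities above follow a standard extract-transform-load pattern. The sketch below is a toy illustration of that pattern only: in this role the extract stage would consume from Kafka topics and the transform stage would run as Spark jobs, but plain Python stands in here so the three stages are visible end to end. All function names, field names, and sample records are hypothetical.

```python
# Minimal extract-transform-load (ETL) sketch. Illustrative only; Kafka
# and Spark (named in the requirements) would replace the extract and
# transform stages in a production pipeline.

def extract(source):
    """Extract: yield raw records from a source (here, an in-memory list)."""
    yield from source

def transform(records):
    """Transform: normalize field names and drop incomplete records."""
    for rec in records:
        if rec.get("sku") is None:
            continue  # a non-functional requirement: reject malformed input
        yield {"sku": rec["sku"].strip().upper(), "qty": int(rec.get("qty", 0))}

def load(records, sink):
    """Load: append cleaned records to a sink (here, a list standing in for a warehouse table)."""
    for rec in records:
        sink.append(rec)
    return sink

# Usage: the three stages compose into one lazy pipeline.
raw = [{"sku": " ab-101 ", "qty": "3"}, {"sku": None}, {"sku": "cd-202"}]
warehouse = load(transform(extract(raw)), [])
```

Because each stage is a generator, records stream through one at a time rather than being materialized in full, which is the same property that makes Kafka-to-Spark pipelines scale to large data sets.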