Data Scientist (944)

American Builders and Contractors Supply Co., Chicago, IL

About The Position

ABC Supply is North America’s largest wholesale distributor of exterior and interior building products. ABC Supply is proud to be an employee-first company. In fact, we have won the Gallup Exceptional Workplace Award every year since its inception in 2007, and Glassdoor has named us one of the best places to work in the country. Be part of a company that recognizes your talents, rewards your efforts, and helps you reach your full potential. At ABC Supply, we have YOUR future covered.

ABC Supply is currently seeking a Data Scientist to deliver advanced analytics solutions for the enterprise. ABC has built a world-class analytics platform in Azure using products like Data Lake and Databricks. You will be working on the leading edge of data science for the distribution industry, focusing on developing and deploying solutions that drive business value and improve operational efficiency. As a member of the Data and Analytics team at ABC, you will be delivering high-impact solutions driven by our executive leadership.

The Data Scientist will report to the Manager of Data Science and partner with their functional team. This team will organize around a business domain and consist of a product manager, engineers (data, BI, and software), and specialists/analysts. The Data Scientist is responsible for solution development, data exploration, and model building across a range of business problems. They will work closely with the business to understand operations and the data supporting those operations, identify data sources (both internal and external), and collaborate with data engineers to create the data pipelines needed to support analytic solutions.

This role is ideal for someone who has a strong foundation in machine learning and statistics, writes clean and reproducible code, and is eager to grow into advanced areas such as GenAI/LLM applications, MLOps, or deep learning within an enterprise setting.

Requirements

  • Master's degree in a quantitative discipline (e.g., Statistics, Engineering, Sciences), or equivalent practical experience.
  • 1–3 years of experience applying analytics to product or business problems, including coding (e.g., Python, R, SQL), querying databases, and performing statistical analysis.
  • Demonstrable understanding and practical experience with a range of machine learning, statistical modeling, and/or operations research techniques, with depth in at least one core domain: supervised learning, time-series forecasting, clustering/segmentation, or optimization.
  • Significant experience with common packages used for data wrangling (e.g., pandas, tidyverse), modeling (e.g., scikit-learn, caret/tidymodels), and data visualization (e.g., seaborn, matplotlib, ggplot2).
  • Understanding of descriptive and inferential statistics, e.g., hypothesis testing, confidence intervals, distributions, and correlations.
  • Understanding of the steps involved in the machine learning life cycle, including data selection and preparation, feature engineering, model training, model selection, model testing (cross-validation, A/B testing), model interpretation, and inference.
  • Basic understanding of data management principles and architectures, including data warehouses, data lakes, and lakehouses, as well as the data integration (ETL/ELT) tools around them.
  • Experience with at least one enterprise data visualization tool (Tableau preferred).
  • Experience with one or more cloud data platforms (Azure and Databricks preferred).
  • Openness to adopting AI-powered development tools and assistants to enhance coding speed, debugging, and workflow efficiency.
  • Enjoys working in a collaborative team environment. Data science projects involve professionals from data engineering, DevOps, and business intelligence, as well as business experts, and it is essential to develop and maintain positive working relationships with them.
  • Strong communication skills; comfortable explaining technical concepts in plain business terms to business stakeholders and to technical professionals from different backgrounds.
  • Strong technical aptitude with a passion for continuous learning.

Nice To Haves

  • Familiarity with experiment tracking and model management tools (e.g., MLflow).
  • Exposure to GenAI/LLM concepts: prompt engineering, retrieval-augmented generation (RAG), embeddings, or fine-tuning.
  • Exposure to deep learning frameworks (e.g., PyTorch, TensorFlow).
  • Familiarity with operations research concepts such as linear programming, mixed-integer programming (MIP), or combinatorial optimization — applied to problems like vehicle routing, inventory optimization, or resource allocation.
  • Experience with version control systems (Git) and collaborative development workflows.
  • Understanding of MLOps principles: model deployment, monitoring, automated retraining.
  • Familiarity with agile or Scrum methodologies.

Responsibilities

  • Develop machine learning and statistical models for a variety of use cases, including classification, clustering/segmentation, optimization, forecasting, and natural language processing.
  • Wrangle, cleanse, and transform data from multiple sources to enable solution development.
  • Carry out ad-hoc or targeted analyses to answer business or technical questions for a wide variety of stakeholders in the organization.
  • Collaborate with project stakeholders to gather the requirements, data, and other information needed for a project, and develop strategic solutions that achieve project goals.
  • Work closely with data engineers to design and build the data pipelines that feed analytic solutions.
  • Actively participate in the software development lifecycle for data science-based applications, including design and development, testing, deployment, and support.
  • Support the ongoing maintenance and enhancement of deployed models, including monitoring model performance and data drift.
  • Leverage AI-powered coding and development tools (e.g., GitHub Copilot) to accelerate development workflows, improve code quality, and increase personal productivity.
  • Create and maintain documentation using appropriate tools, including wikis for narrative documentation and version-controlled repositories for technical documentation.
  • Research opportunities for new model usage and new uses for existing data.
  • Stay current on advances in machine learning, statistical modeling, deep learning, and GenAI/LLM techniques and proactively identify opportunities to apply them.
  • Operate in an agile environment intended to facilitate iterative development and continuous delivery.
  • Effectively communicate with colleagues on both technical and non-technical topics, across a variety of communications platforms.

Benefits

  • Health, dental, and vision coverage - eligible after 60 days, low out-of-pocket costs
  • 401(k) with generous company match - eligible after 60 days, immediately vested
  • Employer paid employee assistance program
  • Employer paid short-term and long-term disability
  • Employer paid life insurance
  • Flex spending
  • Paid vacation
  • Paid sick days
  • Paid holidays