Multiple Product Forecasting in the Construction Industry

A custom deep neural network to predict stock

In this article, we will cover a use case in the construction industry: forecasting the materials needed for a construction and the time at which they will be required. In this industry, there is a lot of uncertainty between the time an order is placed and the time it is actually executed, due to several factors which will be described in detail below.

Business Problem

Let’s cover the case where we want to buy heavy industry materials from a supplier, but we only have a high-level estimate of the amount we will need. We do not know upfront the exact time and characteristics of the materials that will be needed, since there might be delays in the project and changes between order and execution. Our clients are executing constructions and contact us with preliminary orders stating their requirements.

We need to know several things about this process:

  • When will this specific client execute the order?
  • What material characteristics will be preferable for this customer?
  • What are the best materials to keep in stock right now, and in what amounts? The supplier needs some fixed lead time (e.g. 4 weeks) to produce them and transport them to a storage facility. If we keep too little in stock, we’ll delay the construction. If we keep too much, unused materials can degrade and waste valuable storage capacity.

Data Entities

Let’s consider the following data entities and associated historical data for this problem:


Supplier: Dispatches the raw materials we need.
  • Supplier ID
  • ZIP Code


Builder: Executes the construction.
  • Builder ID
  • ZIP Code


Construction Site: The site which is being built.
  • Location
  • Construction Type (Residential/Industrial, industry type)
  • Building Area/Dimensions
  • Number of expected workers


Order: A material request.
  • Material Type (e.g. Cement, beams, rebars)
  • Characteristics: e.g. amount of cement 3000 kg, T-bar beam width 3”, rebar diameter 1/2”, strength, …


For each stakeholder involved in the process, there are factors causing uncertainty in the questions described above. We may over- or underestimate the amount/quality of the required materials (due to inaccurate information in the construction plans or internal uncertainties in the estimates). Builders can waste a certain material or use it more efficiently than planned. The delay between order and execution depends on the complexity of the construction process, the time of the year (holidays!), and supplier bottlenecks, among other factors.

Relevant Features

There are some relevant features that can be extracted to make this problem easier to predict:

    • Construction x Material: Amount and type of each material needed at order time and execution time for a certain construction. This tells us which constructions over- and underestimated certain materials, and what was the delay between order and execution. It will be used for building the targets of our problem.
    • Builder/Supplier: Statistics on the historical differences in material amount/characteristics between order time and execution time (e.g. ordered 3 tons of cement but only needed 2.5, on average, over the past 3 months).
    • Time: Time of the year (month, quarter, season, …) and historical features on the difference between order time and execution time.
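As a sketch of the Builder/Supplier features above, the historical over/under-estimation per builder could be computed along these lines (builder IDs, tonnages, and column names are all hypothetical):

```python
import pandas as pd

# Hypothetical history of past orders (all values are made up).
history = pd.DataFrame({
    "builder_id": ["B1", "B1", "B2", "B2"],
    "ordered_tons": [3.0, 2.0, 5.0, 4.0],
    "executed_tons": [2.5, 1.8, 5.5, 4.1],
})

# Positive delta = the builder needed more than ordered; negative = less.
history["delta"] = history["executed_tons"] - history["ordered_tons"]

# Average over/under-estimation per builder, a candidate model feature.
builder_bias = history.groupby("builder_id")["delta"].mean()
```

The same aggregation can be computed per supplier, or over a rolling time window (e.g. the past 3 months) to capture recent behavior.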


For simplification purposes, let’s assume we only want to predict a single material: e.g. beams needed for a single unit in the construction.

We need to determine:

  • Required number of beams
  • Time until the order is executed, after it’s ordered with some initial characteristics

Option 1 – Multitask Regression Model

In this initial approach, we take the features at order time and try to predict the number of beams needed, and the number of days between order and execution. This is done using a multitask model, with two regression tasks.
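A minimal sketch of such a multitask regression network, assuming Keras/TensorFlow and a hypothetical set of 8 numeric order-time features (layer sizes are illustrative):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input: 8 numeric features available at order time.
inputs = keras.Input(shape=(8,))
x = layers.Dense(32, activation="relu")(inputs)
x = layers.Dropout(0.2)(x)
x = layers.Dense(16, activation="relu")(x)

# Two regression heads sharing the same representation.
n_beams = layers.Dense(1, name="n_beams")(x)
delay_days = layers.Dense(1, name="delay_days")(x)

model = keras.Model(inputs, [n_beams, delay_days])
model.compile(optimizer="adam",
              loss={"n_beams": "mse", "delay_days": "mse"})

# Untrained predictions, just to show the two outputs.
preds = model.predict(np.random.rand(4, 8), verbose=0)
```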

The advantages of this approach are that it is easy to set up and the targets are easy to interpret; sharing a representation between the two tasks can also make the model more robust and improve performance. However, there are several disadvantages:

  • There’s a defined set of templates for the beams (SKUs, Stock Keeping Units), and the model might predict beam configurations that do not exist!
  • Hard decision process: there’s no way to measure prediction confidence when all you have is a single point estimate.
  • Difficult convergence: the domain of possible values is very large, and it’s not easy to tell whether a prediction is good or not.


Option 2 – Multitask Classification

We can alternatively build a multitask classification model, where we consider two tasks:

  • Whether or not the execution beams in our hypothesis matched a certain beam in stock. This means we will have to create artificial samples in our dataset: one positive row and M − 1 negative rows, where M is the number of possible beam SKUs.
  • Probability of the number of days between today’s date and execution being less than N weeks. This will require generating random dates between the order date and the execution date. The value of N is determined by the time our client needs for production and transportation to storage facilities.

The table below shows an example of what this artificial sampling would look like, with the following columns:

  • Sampling Date: randomly sampled dates between order_date and execution_date.
  • Execution Beam Width (hypothesis): the comparison we’re performing. These are the widths of the beams that are in stock.
  • Execution Beam Width: what really happened. We use the comparison with “Execution Beam Width (hypothesis)” as the target.

The rows where the target is positive are shown in blue, and those where it is negative in orange. For instance, a sampling date of 25/2/2021 is close enough to the execution date to be considered a positive target, while 20/2/2021 is not. For the execution beams, the target is positive when the pre-order and the execution match.
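The artificial sampling could be generated roughly as follows (the beam widths, dates, and 4-week threshold are all illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical catalogue of beam widths kept in stock (the SKUs).
stock_widths = [2.0, 3.0, 4.0]

order = {
    "order_date": pd.Timestamp("2021-01-10"),
    "execution_date": pd.Timestamp("2021-02-27"),
    "execution_beam_width": 3.0,  # what was actually used
}

rows = []
for width in stock_widths:
    # Random sampling date between order date and execution date.
    offset = int(rng.integers(
        0, (order["execution_date"] - order["order_date"]).days))
    sampling_date = order["order_date"] + pd.Timedelta(days=offset)
    rows.append({
        "sampling_date": sampling_date,
        "beam_width_hypothesis": width,
        # Task 1 target: does the hypothesis match the executed beam?
        "beam_match": int(width == order["execution_beam_width"]),
        # Task 2 target: is execution less than 4 weeks (28 days) away?
        "executes_within_4w": int(
            (order["execution_date"] - sampling_date).days < 28),
    })

samples = pd.DataFrame(rows)
```

Each real order expands into one row per beam SKU: a single positive row, plus negative rows for all the other SKUs.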

Model Architecture

Regarding model architecture, we can build a two-stream model: we separate the features related to the delay between order and execution from the features related to the difference between ordered and executed material. Since we’ll have multiple rows with similar features, this split explicitly tells the model to treat the two groups differently.

The proposed architecture is relatively simple: a set of Dense and Dropout layers per stream, followed by an aggregation operation (e.g. concatenation). Afterward, another set of Dense/Dropout layers transforms this concatenated latent space. At the bottom, two softmax output layers, one for each task, are added.
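A sketch of this two-stream architecture in Keras, where the feature counts and layer sizes are assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical split: 6 delay-related and 5 material-related features.
delay_in = keras.Input(shape=(6,), name="delay_features")
material_in = keras.Input(shape=(5,), name="material_features")

def stream(x):
    # One Dense/Dropout stream per feature group.
    x = layers.Dense(32, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    return layers.Dense(16, activation="relu")(x)

# Aggregate the two streams, then transform the shared latent space.
merged = layers.Concatenate()([stream(delay_in), stream(material_in)])
z = layers.Dense(32, activation="relu")(merged)
z = layers.Dropout(0.2)(z)

# One softmax output head per classification task.
beam_match = layers.Dense(2, activation="softmax", name="beam_match")(z)
within_n_weeks = layers.Dense(2, activation="softmax",
                              name="within_n_weeks")(z)

model = keras.Model([delay_in, material_in],
                    [beam_match, within_n_weeks])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

preds = model.predict([np.random.rand(4, 6), np.random.rand(4, 5)],
                      verbose=0)
```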

Compared to Option 1, this architecture has the advantage of allowing a decision process based on prediction confidence and only predicting items that the client is able to produce. However, the process complexity is higher: you need to create positive and negative training samples, and it is harder to set up.

We can also add custom penalizations to our loss function according to the business problem. If we predict 30 beams for a building that needs 20, that is acceptable; if the building needs more than we predicted, the stock will not be sufficient, so under-prediction should be punished more heavily. Similarly, when the model predicts a different but compatible material, we can penalize it less than when the predicted material is incompatible.
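One way to sketch such a penalization, here as a plain NumPy function with assumed weights (a custom Keras loss would have the same shape, operating on tensors instead):

```python
import numpy as np

def asymmetric_penalty(pred_units, needed_units,
                       under_weight=3.0, over_weight=1.0):
    """Punish shortages more than surpluses (weights are assumptions)."""
    pred_units = np.asarray(pred_units, dtype=float)
    needed_units = np.asarray(needed_units, dtype=float)
    shortfall = needed_units - pred_units
    return np.where(shortfall > 0,
                    under_weight * shortfall,   # under-prediction: costly
                    over_weight * -shortfall)   # over-prediction: milder

# Predicting 30 beams when 20 are needed costs less than the reverse.
over = asymmetric_penalty(30, 20)   # surplus of 10 units
under = asymmetric_penalty(20, 30)  # shortage of 10 units
```

The same idea extends to material compatibility: a mismatch with a compatible material gets a smaller weight than a mismatch with an incompatible one.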

Decision Process

Building a multitask classification model allows us to create a decision process based on expected value: a prediction with probability P of needing K units of a product has an expected value of P × K units.

To know which materials to keep in stock for the next N weeks, we can sum these expected values over all constructions that have been ordered but not yet executed.
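As a sketch with made-up numbers: three open orders for the same beam SKU, each with a predicted probability of being executed within the next N weeks:

```python
import numpy as np

# Hypothetical open (ordered, not yet executed) requests for one SKU.
probs = np.array([0.8, 0.2, 0.5])   # P(needed within the next N weeks)
units = np.array([10, 40, 20])      # units requested by each construction

# Expected number of units to keep in stock for this SKU.
expected_stock = float((probs * units).sum())
```

Repeating this per SKU gives the full stocking plan for the period.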

Impact Measurement

What metrics would be important to measure?

Internally, for building and evaluating your model, you can use Machine Learning metrics:

  • Classification: PR AUC, ROC AUC, …
  • Regression: Mean Absolute Error, Mean Squared Error…

But these metrics alone tell you nothing about how good the model is at predicting the amount you need to stock. You need to measure business metrics as well:

  • How many products did we predict/produce in excess because no one purchased them?
  • How many products did we fail to predict/produce on time, leading to extra delays in the construction?


Conclusion

This article has shown different ways to think about product forecasting problems where there are many products with similar characteristics.

We only covered the specific case of forecasting a single product type (beams) with different characteristics. However, the approach can be generalized to other products, such as the amount of cement needed, by adapting the model. Since there are no “cement SKUs” and any predicted amount is valid, you can replace the classification head with a linear layer, creating a regression task alongside the binary classification for the time delay.
