
Intermittent demand, Croston and Die Hard

I have recently been confronted with a kind of data set and problem that I was not even aware existed: intermittent demand data. Intermittent demand arises when the demand for a certain good arrives sporadically. Let’s take a look at an example, by analyzing the number of downloads of the {RDieHarder} package:

library(tidyverse)
library(tsintermittent)
library(nnfor)
library(cranlogs)
library(brotools)
rdieharder <- cran_downloads("RDieHarder", from = "2017-01-01")

ggplot(rdieharder) +
  geom_line(aes(y = count, x = date), colour = "#82518c") +
  theme_blog()

The above plot is not very clear because of the outlier just before 2019, so let’s zoom in on a single month of data. But first, I wonder: was that spike on Christmas day?

rdieharder %>%
  filter(count == max(count))
##         date count    package
## 1 2018-12-21   373 RDieHarder

Not exactly on Christmas day, but almost! Anyway, let’s look at one month of data:

january_2018 <- rdieharder %>%
  filter(between(date, as.Date("2018-01-01"), as.Date("2018-02-01")))

ggplot(january_2018) +
  geom_line(aes(y = count, x = date), colour = "#82518c") +
  theme_blog()

Now, it is clear that this will be tricky to forecast. There is no discernible pattern, no trend, no seasonality… nothing that would make it “easy” for a model to learn how to forecast such data.

This is typical intermittent demand data. Specific methods have been developed to forecast such data, the most well-known being Croston’s method, as detailed in this paper. A function to estimate such models is available in the {tsintermittent} package, written by Nikolaos Kourentzes, who also wrote another package, {nnfor}, which uses Neural Networks to forecast time series data. I am going to use both to try to forecast the intermittent demand for the {RDieHarder} package for the year 2019.
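
To build some intuition before using the packaged version: Croston’s method splits the series into the sizes of the non-zero demands and the intervals between them, smooths each with simple exponential smoothing, and forecasts their ratio. Here is a minimal sketch of that idea (the fixed alpha is an illustrative choice; the crost() function used below actually optimizes the smoothing parameters):

croston_sketch <- function(y, alpha = 0.1) {
  nonzero <- which(y != 0)
  demand_sizes <- y[nonzero]        # sizes of the non-zero demands
  intervals <- diff(c(0, nonzero))  # periods elapsed between them
  z_hat <- demand_sizes[1]
  x_hat <- intervals[1]
  # classic Croston only updates the estimates when a demand occurs
  for (i in seq_along(demand_sizes)[-1]) {
    z_hat <- alpha * demand_sizes[i] + (1 - alpha) * z_hat
    x_hat <- alpha * intervals[i] + (1 - alpha) * x_hat
  }
  z_hat / x_hat  # flat forecast for every future period
}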

Let’s first load these packages:

library(tsintermittent)
library(nnfor)

And as usual, split the data into training and testing sets:

train_data <- rdieharder %>%
  filter(date < as.Date("2019-01-01")) %>%
  pull(count) %>%
  ts()

test_data <- rdieharder %>%
  filter(date >= as.Date("2019-01-01"))
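
As an aside, the h = 163 you will see below is simply the number of rows in the test set (163 daily observations starting on 2019-01-01 take us to mid-June 2019). If you re-run this with fresher data, you can compute the horizon instead of hard-coding it; forecast_horizon is just an illustrative variable name:

# h = 163 below is just the size of the test set; recompute it if you
# re-run this with fresher data
forecast_horizon <- nrow(test_data)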

Let’s consider three models: a naive one, which simply uses the mean of the training set as the forecast for all future periods; Croston’s method; and finally a Neural Network from the {nnfor} package:

naive_model <- mean(train_data)

croston_model <- crost(train_data, h = 163)

nn_model <- mlp(train_data, reps = 1, hd.auto.type = "cv")
## Warning in preprocess(y, m, lags, keep, difforder, sel.lag,
## allow.det.season, : No inputs left in the network after pre-selection,
## forcing AR(1).
nn_model_forecast <- forecast(nn_model, h = 163)

The crost() function estimates Croston’s model, and the h argument produces the forecast for the next 163 days. mlp() trains a multilayer perceptron, and the hd.auto.type = "cv" argument means that 5-fold cross-validation will be used to find the best number of hidden nodes. I then obtain the forecast using the forecast() function. As you can read from the warning message above, the Neural Network was replaced by an auto-regressive model, AR(1), because no inputs were left after pre-selection. I am not exactly sure what that means, but if I remove the big outlier from before, this warning message disappears and a Neural Network is successfully trained, as the sketch below shows.
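
Here is a minimal sketch of that check: it drops the single row containing the 2018-12-21 spike before training. Note that this shortens the series by one day, which is fine for a quick check but changes the time indexing:

# same training set, minus the 2018-12-21 spike
train_data_no_outlier <- rdieharder %>%
  filter(date < as.Date("2019-01-01"),
         date != as.Date("2018-12-21")) %>%
  pull(count) %>%
  ts()

# with the outlier gone, mlp() trains an actual neural network
nn_model_no_outlier <- mlp(train_data_no_outlier, reps = 1, hd.auto.type = "cv")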

In order to rank the models, I follow this paper from Rob J. Hyndman, who wrote a very useful book titled Forecasting: Principles and Practice, and use the Mean Absolute Scaled Error, or MASE. You can also read this shorter pdf, which details how to use MASE to measure accuracy for intermittent demand.
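
In symbols, MASE scales each out-of-sample error by the in-sample mean absolute error of the one-step naive forecast; this is the standard definition, which the function below implements:

$$\text{MASE} = \text{mean}(|q_t|), \qquad q_t = \frac{e_t}{\frac{1}{n-1}\sum_{i=2}^{n} |y_i - y_{i-1}|}$$

where the $e_t$ are the forecast errors on the test set and the $y_i$ are the training observations. Here is the function: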

mase <- function(train_ts, test_ts, outsample_forecast){

  # in-sample naive benchmark: the lagged training series
  naive_insample_forecast <- stats::lag(train_ts)

  # scaling factor: mean absolute error of the naive benchmark
  insample_mae <- mean(abs(train_ts - naive_insample_forecast), na.rm = TRUE)
  error_outsample <- test_ts - outsample_forecast

  # scale the out-of-sample errors and average their absolute values
  ase <- error_outsample / insample_mae
  mean(abs(ase), na.rm = TRUE)
}

It is now easy to compute the models’ accuracies:

mase(train_data, test_data$count, naive_model)
## [1] 1.764385
mase(train_data, test_data$count, croston_model$component$c.out[1])
## [1] 1.397611
mase(train_data, test_data$count, nn_model_forecast$mean)
## [1] 1.767357

Croston’s method is the one that performs best of the three. Perhaps surprisingly, the naive method performs just as well as the Neural Network (or rather, the AR(1) model)! Let’s also plot the predictions against the true values from the test set:

test_data <- test_data %>%
  mutate(naive_model_forecast = naive_model,
         croston_model_forecast = croston_model$component$c.out[1],
         nn_model_forecast = nn_model_forecast$mean) %>%
  select(-package) %>%
  rename(actual_value = count)


test_data_longer <- test_data %>%
  gather(models, value,
         actual_value, naive_model_forecast, croston_model_forecast, nn_model_forecast)
## Warning: attributes are not identical across measure variables;
## they will be dropped
ggplot(test_data_longer) +
  geom_line(aes(y = value, x = date, colour = models)) +
  theme_blog()

Just to make sure I didn’t make a mistake when writing the mase() function, let’s use the accuracy() function from the {forecast} package and compare the result for the Neural Network:

library(forecast)
accuracy(nn_model_forecast, x = test_data$actual_value)
##                       ME     RMSE      MAE  MPE MAPE      MASE       ACF1
## Training set 0.001929409 14.81196 4.109577  NaN  Inf 0.8437033 0.05425074
## Test set     8.211758227 12.40199 8.635563 -Inf  Inf 1.7673570         NA

The MASE on the test set is the same, so it does seem like the naive method is not that bad, actually! Now, in general, intermittent demand series have a lot of 0 values, which is not really the case here. I still think that the methodology fits this particular data set.
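
For instance, here is a quick check of the share of days with zero downloads:

# share of days with zero downloads; textbook intermittent demand
# data would have a much higher share
mean(rdieharder$count == 0)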

How else would you have forecast this data? Let me know via twitter!

Hope you enjoyed! If you found this blog post useful, you might want to follow me on twitter for blog post updates and buy me an espresso or paypal.me.
