
Classification of historical newspapers content: a tutorial combining R, bash and Vowpal Wabbit, part 2

In part 1 of this series I set up Vowpal Wabbit to classify newspaper content. Now, let’s use the model to make predictions and see if and how we can improve it. Then, let’s train the model on the whole data.

Step 1: prepare the data

The first step consists in importing the test data and preparing it. Unlike the training data, the test set is small, so it can be imported and worked on directly in R.

I need to blank out the target column of the test set, or else Vowpal Wabbit will use it when making predictions. If you keep this column, the accuracy of the model will look very high, but this is meaningless: at prediction time you do not have the target column, because it is precisely the column you want to predict!

library("tidyverse")
library("yardstick")

small_test <- read_delim("data_split/small_test.txt", "|",
                      escape_double = FALSE, col_names = FALSE,
                      trim_ws = TRUE)

small_test %>%
    mutate(X1= " ") %>%
    write_delim("data_split/small_test2.txt", col_names = FALSE, delim = "|")

I wrote the data in a file called small_test2.txt and can now use my model to make predictions:

system2("/home/cbrunos/miniconda3/bin/vw", args = "-t -i vw_models/small_oaa.model data_split/small_test2.txt -p data_split/small_oaa.predict")

The predictions get saved in small_oaa.predict, a plain text file. Let’s add these predictions to the original test set:

small_predictions <- read_delim("data_split/small_oaa.predict", "|",
                                escape_double = FALSE, col_names = FALSE,
                                trim_ws = TRUE)

# The true labels, as a factor, for {yardstick}
small_test <- small_test %>%
    rename(truth = X1) %>%
    mutate(truth = factor(truth, levels = c("1", "2", "3", "4", "5")))

# The predicted labels, also as a factor
small_predictions <- small_predictions %>%
    rename(predictions = X1) %>%
    mutate(predictions = factor(predictions, levels = c("1", "2", "3", "4", "5")))

small_test <- small_test %>%
    bind_cols(small_predictions)

Step 2: use the model and test data to evaluate performance

We can use several of the metrics included in {yardstick} to evaluate the model’s performance:

conf_mat(small_test, truth = truth, estimate = predictions)
          Truth
Prediction  1  2  3  4  5
         1 51 15  2 10  1
         2 11  6  3  1  0
         3  0  0  0  0  0
         4  0  0  0  0  0
         5  0  0  0  0  0

accuracy(small_test, truth = truth, estimate = predictions)
# A tibble: 1 x 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy multiclass     0.570

We can see that the model never predicted class 3, 4 or 5. Can we do better by adding some regularization? Let’s find out!

Step 3: adding regularization

Before trying regularization, let’s try changing the loss function from the logistic loss to the hinge loss (the loss function used by support vector machines). One more change: from this point on, the metrics are computed on the full test set (1302 articles) instead of the small one.
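The full test set comes out of the train/test split shown in Step 4 below, and it gets prepared exactly like small_test was above. Here is a minimal sketch of that preparation, using the file names that appear in the rest of this post:

test <- read_delim("data_split/test.txt", "|",
                   escape_double = FALSE, col_names = FALSE,
                   trim_ws = TRUE)

# Write a copy with the target column blanked out, for prediction
test %>%
    mutate(X1 = " ") %>%
    write_delim("data_split/test2.txt", col_names = FALSE, delim = "|")

# Keep the true labels as a factor, for {yardstick}
test <- test %>%
    rename(truth = X1) %>%
    mutate(truth = factor(truth, levels = c("1", "2", "3", "4", "5")))

Now, back to the hinge loss: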

# Train the model with the hinge loss
hinge_oaa_fit <- system2("/home/cbrunos/miniconda3/bin/vw", args = "--oaa 5 -d data_split/small_train.txt --loss_function hinge -f vw_models/hinge_oaa.model", stderr = TRUE)

system2("/home/cbrunos/miniconda3/bin/vw", args = "-i vw_models/hinge_oaa.model -t -d data_split/small_test2.txt -p data_split/hinge_oaa.predict")


predictions <- read_delim("data_split/hinge_oaa.predict", "|",
                          escape_double = FALSE, col_names = FALSE,
                          trim_ws = TRUE)

predictions <- predictions %>%
    rename(predictions = X1) %>%
    mutate(predictions = factor(predictions, levels = c("1", "2", "3", "4", "5")))

# Bind the predictions to the test set
test <- test %>%
    bind_cols(predictions)

conf_mat(test, truth = truth, estimate = predictions)
          Truth
Prediction   1   2   3   4   5
         1 411 120  45  92   1
         2 355 189  12  17   0
         3  11   2   0   0   0
         4  36   4   0   1   0
         5   3   0   3   0   0

accuracy(test, truth = truth, estimate = predictions)
# A tibble: 1 x 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy multiclass     0.462

Well, that didn’t work out so well, but at least we now know how to change the loss function. Let’s go back to the logistic loss and add some regularization. First, let’s train the model:

# --l1 and --l2 set the strength of the L1 and L2 regularization
regul_oaa_fit <- system2("/home/cbrunos/miniconda3/bin/vw", args = "--oaa 5 --l1 0.005 --l2 0.005 -d data_split/small_train.txt -f vw_models/small_regul_oaa.model", stderr = TRUE)

Now we can use it for prediction:

system2("/home/cbrunos/miniconda3/bin/vw", args = "-i vw_models/small_regul_oaa.model -t -d data_split/test2.txt -p data_split/small_regul_oaa.predict")


predictions <- read_delim("data_split/small_regul_oaa.predict", "|",
                          escape_double = FALSE, col_names = FALSE,
                          trim_ws = TRUE)

# Drop the previous predictions before binding the new ones
test <- test %>%
    select(-predictions)

predictions <- predictions %>%
    rename(predictions = X1) %>%
    mutate(predictions = factor(predictions, levels = c("1", "2", "3", "4", "5")))

test <- test %>%
    bind_cols(predictions)

We can now compute the metrics:

conf_mat(test, truth = truth, estimate = predictions)
          Truth
Prediction   1   2   3   4   5
         1 816 315  60 110   1
         2   0   0   0   0   0
         3   0   0   0   0   0
         4   0   0   0   0   0
         5   0   0   0   0   0

accuracy(test, truth = truth, estimate = predictions)
# A tibble: 1 x 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy multiclass     0.627

So accuracy improved, but the model now only ever predicts class 1… let’s try other hyper-parameter values:

regul_oaa_fit <- system2("/home/cbrunos/miniconda3/bin/vw", args = "--oaa 5 --l1 0.00015 --l2 0.00015 -d data_split/small_train.txt -f vw_models/small_regul_oaa.model", stderr = TRUE)
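The prediction and binding steps are exactly the same as before; here they are again, as a quick sketch:

system2("/home/cbrunos/miniconda3/bin/vw", args = "-i vw_models/small_regul_oaa.model -t -d data_split/test2.txt -p data_split/small_regul_oaa.predict")

predictions <- read_delim("data_split/small_regul_oaa.predict", "|",
                          escape_double = FALSE, col_names = FALSE,
                          trim_ws = TRUE) %>%
    rename(predictions = X1) %>%
    mutate(predictions = factor(predictions, levels = c("1", "2", "3", "4", "5")))

# Swap in the new predictions
test <- test %>%
    select(-predictions) %>%
    bind_cols(predictions)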
conf_mat(test, truth = truth, estimate = predictions)
          Truth
Prediction   1   2   3   4   5
         1 784 300  57 108   1
         2  32  14   3   2   0
         3   0   1   0   0   0
         4   0   0   0   0   0
         5   0   0   0   0   0

accuracy(test, truth = truth, estimate = predictions)
# A tibble: 1 x 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy multiclass     0.613

So accuracy is lower than before, but at least more categories get predicted. Depending on your needs, you should consider other metrics; for classification problems in particular, accuracy alone can be misleading when the data is severely unbalanced, as it is here.
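{yardstick} makes it easy to compute several metrics at once with metric_set(); here is a quick sketch with balanced accuracy and the F1 score, both of which are macro-averaged across classes by default for multiclass problems:

# Bundle several metrics into a single function
multi_metrics <- metric_set(accuracy, bal_accuracy, f_meas)

multi_metrics(test, truth = truth, estimate = predictions)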

Anyhow, to finish this blog post, let’s train the model on the whole data and measure the time it takes to run the full model.

Step 4: training on the whole data

Let’s first split the whole data into a training and a testing set:

# Count the lines in the whole data
nb_lines <- system2("cat", args = "text_fr.txt | wc -l", stdout = TRUE)

# Put 99.5% of the lines in the training set, the rest in the test set
system2("split", args = paste0("-l", floor(as.numeric(nb_lines)*0.995), " text_fr.txt data_split/"))

# split names its output files aa, ab, …; rename them to something nicer
system2("mv", args = "data_split/aa data_split/train.txt")
system2("mv", args = "data_split/ab data_split/test.txt")

The whole data contains 260247 lines, and the training set weighs 667MB, which is quite large. Let’s train the simple multiclass (one-against-all) classifier on this data and see how long it takes:

tic <- Sys.time()
oaa_fit <- system2("/home/cbrunos/miniconda3/bin/vw", args = "--oaa 5 -d data_split/train.txt -f vw_models/oaa.model", stderr = TRUE)
Sys.time() - tic
Time difference of 4.73266 secs

Yep, you read that right. Training the classifier on 667MB of data took less than 5 seconds!

Let’s take a look at the final object:

oaa_fit
 [1] "final_regressor = vw_models/oaa.model"                                   
 [2] "Num weight bits = 18"                                                    
 [3] "learning rate = 0.5"                                                     
 [4] "initial_t = 0"                                                           
 [5] "power_t = 0.5"                                                           
 [6] "using no cache"                                                          
 [7] "Reading datafile = data_split/train.txt"                                 
 [8] "num sources = 1"                                                         
 [9] "average  since         example        example  current  current  current"
[10] "loss     last          counter         weight    label  predict features"
[11] "1.000000 1.000000            1            1.0        2        1      253"
[12] "0.500000 0.000000            2            2.0        2        2      499"
[13] "0.250000 0.000000            4            4.0        2        2        6"
[14] "0.250000 0.250000            8            8.0        1        1     2268"
[15] "0.312500 0.375000           16           16.0        1        1      237"
[16] "0.250000 0.187500           32           32.0        1        1      557"
[17] "0.171875 0.093750           64           64.0        1        1      689"
[18] "0.179688 0.187500          128          128.0        2        2      208"
[19] "0.144531 0.109375          256          256.0        1        1      856"
[20] "0.136719 0.128906          512          512.0        4        4        4"
[21] "0.122070 0.107422         1024         1024.0        1        1     1353"
[22] "0.106934 0.091797         2048         2048.0        1        1      571"
[23] "0.098633 0.090332         4096         4096.0        1        1       43"
[24] "0.080566 0.062500         8192         8192.0        1        1      885"
[25] "0.069336 0.058105        16384        16384.0        1        1      810"
[26] "0.062683 0.056030        32768        32768.0        2        2      467"
[27] "0.058167 0.053650        65536        65536.0        1        1       47"
[28] "0.056061 0.053955       131072       131072.0        1        1      495"
[29] ""                                                                        
[30] "finished run"                                                            
[31] "number of examples = 258945"                                             
[32] "weighted example sum = 258945.000000"                                    
[33] "weighted label sum = 0.000000"                                           
[34] "average loss = 0.054467"                                                 
[35] "total feature number = 116335486"  

Let’s use the test set and see how the model fares:
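As before, we run the model on test2.txt and bind the predictions to the test set; a minimal sketch (the name of the predictions file is my choice here):

system2("/home/cbrunos/miniconda3/bin/vw", args = "-t -i vw_models/oaa.model -d data_split/test2.txt -p data_split/oaa.predict")

predictions <- read_delim("data_split/oaa.predict", "|",
                          escape_double = FALSE, col_names = FALSE,
                          trim_ws = TRUE) %>%
    rename(predictions = X1) %>%
    mutate(predictions = factor(predictions, levels = c("1", "2", "3", "4", "5")))

# Swap in the predictions from the model trained on the whole data
test <- test %>%
    select(-predictions) %>%
    bind_cols(predictions)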

conf_mat(test, truth = truth, estimate = predictions)
          Truth
Prediction   1   2   3   4   5
         1 537 175  52 100   1
         2 271 140   8   9   0
         3   1   0   0   0   0
         4   7   0   0   1   0
         5   0   0   0   0   0

accuracy(test, truth = truth, estimate = predictions)
# A tibble: 1 x 3
  .metric  .estimator .estimate
  <chr>    <chr>          <dbl>
1 accuracy multiclass     0.521

Better accuracy can certainly be achieved with hyper-parameter tuning… maybe the subject for a future blog post? In any case I am very impressed with Vowpal Wabbit and am certainly looking forward to future developments of {RVowpalWabbit}!

Hope you enjoyed! If you found this blog post useful, you might want to follow me on Twitter for blog post updates, or buy me an espresso or donate via paypal.me.
