
Using linear models with binary dependent variables, a simulation study

This blog post is an excerpt from my ebook Modern R with the tidyverse, which you can read for free here. It is taken from Chapter 8, in which I discuss advanced functional programming methods for modeling.

As written just above (note: “above” refers to the book this excerpt is taken from), map() simply applies a function to each element of a list of inputs; in the previous section, we mapped ggplot() to generate many plots at once. The same approach can be used to map any modeling function, for instance lm(), to a list of datasets.
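
For example, here is a minimal sketch of this pattern, using two copies of mtcars as stand-in datasets (purely for illustration):

library(purrr)

# map() returns one fitted model per dataset in the list
datasets <- list(mtcars, mtcars)
models <- map(datasets, ~lm(mpg ~ hp, data = .x))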

Now suppose that you wish to perform a Monte Carlo simulation, and that you are dealing with a binary choice problem; usually, you would use a logistic regression for this.

However, in certain disciplines, especially in the social sciences, the so-called Linear Probability Model is often used as well. The LPM is a simple linear regression, but unlike the standard setting of a linear regression, the dependent variable, or target, is a binary variable, and not a continuous variable. Before you yell “Wait, that’s illegal”, you should know that in practice LPMs do a good job of estimating marginal effects, which is what social scientists and econometricians are often interested in. Marginal effects are another way of interpreting models: they give how the outcome (or the target) changes given a change in an independent variable (or a feature). For instance, a marginal effect of 0.10 for age would mean that the probability of success increases by 10 percentage points for each added year of age.
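
To make this concrete, here is a minimal sketch of how such a marginal effect is computed for a logit model at a given point; the coefficient values and the age below are made up for illustration:

# Hypothetical logit coefficients: intercept and coefficient on age
beta <- c(-2, 0.1)
age <- 30

# The marginal effect of age at this point: the logistic density evaluated
# at the linear index, times the coefficient on age
dlogis(beta[1] + beta[2]*age) * beta[2]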

There has been a lot of discussion on logistic regression versus LPMs, and there are pros and cons to using LPMs. Micro-econometricians are still fond of LPMs, even though the arguments in their favour are not really convincing. However, quoting Angrist and Pischke:

“While a nonlinear model may fit the CEF (population conditional expectation function) for LDVs (limited dependent variables) more closely than a linear model, when it comes to marginal effects, this probably matters little” (source: Mostly Harmless Econometrics)

so LPMs are still used for estimating marginal effects.

Let us check this assessment with one example. First, we simulate some data, then run a logistic regression and compute the marginal effects, and then compare with an LPM:

library(tidyverse)

set.seed(1234)

# Two independent standard normal regressors
x1 <- rnorm(100)
x2 <- rnorm(100)

# Latent linear index and implied probability of success (logistic link)
z <- .5 + 2*x1 + 4*x2

p <- 1/(1 + exp(-z))

# Draw the binary outcome
y <- rbinom(100, 1, p)

df <- tibble(y = y, x1 = x1, x2 = x2)

This data generating process generates data from a binary choice model. Fitting the model using a logistic regression allows us to recover the structural parameters:

logistic_regression <- glm(y ~ ., data = df, family = binomial(link = "logit"))

Let’s see a summary of the model fit:

summary(logistic_regression)
## 
## Call:
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
## 
## Deviance Residuals: 
##      Min        1Q    Median        3Q       Max  
## -2.91941  -0.44872   0.00038   0.42843   2.55426  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   0.0960     0.3293   0.292 0.770630    
## x1            1.6625     0.4628   3.592 0.000328 ***
## x2            3.6582     0.8059   4.539 5.64e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 138.629  on 99  degrees of freedom
## Residual deviance:  60.576  on 97  degrees of freedom
## AIC: 66.576
## 
## Number of Fisher Scoring iterations: 7

We do recover the parameters that generated the data (up to sampling error), but what about the marginal effects? We can get the marginal effects easily using the {margins} package:

library(margins)

margins(logistic_regression)
## Average marginal effects
## glm(formula = y ~ ., family = binomial(link = "logit"), data = df)
##      x1     x2
##  0.1598 0.3516

Or, even better, we can compute the true marginal effects, since we know the data generating process:

meffects <- function(dataset, coefs){
  X <- dataset %>%
    select(-y) %>%
    as.matrix()

  # True marginal effects of the logit DGP: the logistic density evaluated at
  # the linear index, times the coefficient of interest (note that the
  # intercept, coefs[1], does not enter the index here)
  dydx_x1 <- mean(dlogis(X %*% c(coefs[2], coefs[3])) * coefs[2])
  dydx_x2 <- mean(dlogis(X %*% c(coefs[2], coefs[3])) * coefs[3])

  tribble(~term, ~true_effect,
          "x1", dydx_x1,
          "x2", dydx_x2)
}

(true_meffects <- meffects(df, c(0.5, 2, 4)))
## # A tibble: 2 x 2
##   term  true_effect
##   <chr>       <dbl>
## 1 x1          0.175
## 2 x2          0.350

Ok, so now what about using this infamous Linear Probability Model to estimate the marginal effects?

lpm <- lm(y ~ ., data = df)

summary(lpm)
## 
## Call:
## lm(formula = y ~ ., data = df)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.83953 -0.31588 -0.02885  0.28774  0.77407 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.51340    0.03587  14.314  < 2e-16 ***
## x1           0.16771    0.03545   4.732 7.58e-06 ***
## x2           0.31250    0.03449   9.060 1.43e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3541 on 97 degrees of freedom
## Multiple R-squared:  0.5135, Adjusted R-squared:  0.5034 
## F-statistic: 51.18 on 2 and 97 DF,  p-value: 6.693e-16

It’s not too bad, but maybe it could have been better in other circumstances: with more observations, or with a different set of structural parameters, the results of the LPM might have been closer. The LPM estimates the marginal effect of x1 to be 0.1677134, versus 0.1597956 for the logistic regression; for x2, the LPM estimate is 0.3124966 versus 0.351607. The true marginal effects are 0.1750963 and 0.3501926 for x1 and x2 respectively.

Just as data scientists perform cross-validation to assess the accuracy of a model, we can perform a Monte Carlo study to assess how close the marginal effects estimated with an LPM are to the true marginal effects implied by the logistic data generating process. This will allow us to test different sample sizes, numbers of simulated datasets, and structural parameters.

First, let’s write a function that generates data. The function below generates 10 datasets of size 100 (the code is inspired by this StackExchange answer):

generate_datasets <- function(coefs = c(.5, 2, 4), sample_size = 100, repeats = 10){

  # Simulate a single dataset from the binary choice data generating process
  generate_one_dataset <- function(coefs, sample_size){
    x1 <- rnorm(sample_size)
    x2 <- rnorm(sample_size)

    z <- coefs[1] + coefs[2]*x1 + coefs[3]*x2

    p <- 1/(1 + exp(-z))

    y <- rbinom(sample_size, 1, p)

    tibble(y = y, x1 = x1, x2 = x2)
  }

  # Repeat the simulation `repeats` times, and store everything in a one-row tibble
  simulations <- rerun(.n = repeats, generate_one_dataset(coefs, sample_size))

  tibble("coefs" = list(coefs), "sample_size" = sample_size,
         "repeats" = repeats, "simulations" = list(simulations))
}

Let’s first generate one dataset:

one_dataset <- generate_datasets(repeats = 1)

Let’s take a look at one_dataset:

one_dataset
## # A tibble: 1 x 4
##   coefs     sample_size repeats simulations
##   <list>          <dbl>   <dbl> <list>     
## 1 <dbl [3]>         100       1 <list [1]>

As you can see, the tibble with the simulated data is inside a list-column called simulations. Let’s take a closer look:

str(one_dataset$simulations)
## List of 1
##  $ :List of 1
##   ..$ :Classes 'tbl_df', 'tbl' and 'data.frame': 100 obs. of  3 variables:
##   .. ..$ y : int [1:100] 0 1 1 1 0 1 1 0 0 1 ...
##   .. ..$ x1: num [1:100] 0.437 1.06 0.452 0.663 -1.136 ...
##   .. ..$ x2: num [1:100] -2.316 0.562 -0.784 -0.226 -1.587 ...

The structure is quite complex, and it’s important to understand this, because it will have an impact on the next lines of code; it is a list, containing a list, containing a dataset! No worries though, we can still map over the datasets directly, by using modify_depth() instead of map().
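
For instance, here is a quick sanity check, not part of the analysis itself, showing that mapping nrow() at depth 2 reaches the data frames:

modify_depth(one_dataset$simulations, 2, nrow)
## [[1]]
## [[1]][[1]]
## [1] 100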

Now, let’s fit an LPM and compare the estimation of the marginal effects with the true marginal effects. In order to have some confidence in our results, we will not simply run a linear regression on that single dataset; instead, we will simulate hundreds, then thousands, then tens of thousands of datasets, get the marginal effects and compare them to the true ones (though here I won’t simulate more than 500 datasets).

Let’s first generate 10 datasets:

many_datasets <- generate_datasets()

Now comes the tricky part. I have this object, many_datasets looking like this:

many_datasets
## # A tibble: 1 x 4
##   coefs     sample_size repeats simulations
##   <list>          <dbl>   <dbl> <list>     
## 1 <dbl [3]>         100      10 <list [10]>

I would like to fit LPMs to the 10 datasets. For this, I will need to use all the power of functional programming and the {tidyverse}. I will be adding columns to this data frame using mutate() and mapping over the simulations list-column using modify_depth(). The list of data frames is at the second level (remember, it’s a list containing a list containing data frames).

I’ll start by fitting the LPMs, then use broom::tidy() to get a nice data frame of the estimated parameters. I will then keep only what I need and bind the rows of all the data frames together. I will do the same for the true marginal effects.

I highly suggest that you run the following lines one after another. It is complicated to understand what’s going on if you are not used to such workflows. However, I hope to convince you that once it clicks, it’ll be much more intuitive than doing all of this inside a loop. Here’s the code:

results <- many_datasets %>% 
  mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>% 
  mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>% 
  mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>% 
  mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>% 
  mutate(lpm = map(lpm, bind_rows)) %>% 
  mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>% 
  mutate(true_effect = map(true_effect, bind_rows))

This is what results looks like:

results
## # A tibble: 1 x 6
##   coefs     sample_size repeats simulations lpm             true_effect    
##   <list>          <dbl>   <dbl> <list>      <list>          <list>         
## 1 <dbl [3]>         100      10 <list [10]> <tibble [20 × … <tibble [20 × …

Let’s take a closer look at the lpm and true_effect columns:

results$lpm
## [[1]]
## # A tibble: 20 x 2
##    term  estimate
##    <chr>    <dbl>
##  1 x1       0.228
##  2 x2       0.353
##  3 x1       0.180
##  4 x2       0.361
##  5 x1       0.165
##  6 x2       0.374
##  7 x1       0.182
##  8 x2       0.358
##  9 x1       0.125
## 10 x2       0.345
## 11 x1       0.171
## 12 x2       0.331
## 13 x1       0.122
## 14 x2       0.309
## 15 x1       0.129
## 16 x2       0.332
## 17 x1       0.102
## 18 x2       0.374
## 19 x1       0.176
## 20 x2       0.410
results$true_effect
## [[1]]
## # A tibble: 20 x 2
##    term  true_effect
##    <chr>       <dbl>
##  1 x1          0.183
##  2 x2          0.366
##  3 x1          0.166
##  4 x2          0.331
##  5 x1          0.174
##  6 x2          0.348
##  7 x1          0.169
##  8 x2          0.339
##  9 x1          0.167
## 10 x2          0.335
## 11 x1          0.173
## 12 x2          0.345
## 13 x1          0.157
## 14 x2          0.314
## 15 x1          0.170
## 16 x2          0.340
## 17 x1          0.182
## 18 x2          0.365
## 19 x1          0.161
## 20 x2          0.321

Let’s bind the columns, and compute the difference between the true and estimated marginal effects:

simulation_results <- results %>% 
  mutate(difference = map2(.x = lpm, .y = true_effect, bind_cols)) %>% 
  mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>% 
  mutate(difference = map(difference, ~select(., term, difference))) %>% 
  pull(difference) %>% 
  .[[1]]

Let’s take a look at the simulation results:

simulation_results %>% 
  group_by(term) %>% 
  summarise(mean = mean(difference), 
            sd = sd(difference))
## # A tibble: 2 x 3
##   term     mean     sd
##   <chr>   <dbl>  <dbl>
## 1 x1     0.0122 0.0370
## 2 x2    -0.0141 0.0306

Already with only 10 simulated datasets, the average difference between the true and the estimated effects is small, and not statistically different from zero. Let’s rerun the analysis, but for different numbers of repetitions and different structural parameters. In order to make things easier, we can put all the code into a nifty function:

monte_carlo <- function(coefs, sample_size, repeats){
  many_datasets <- generate_datasets(coefs, sample_size, repeats)
  
  results <- many_datasets %>% 
    mutate(lpm = modify_depth(simulations, 2, ~lm(y ~ ., data = .x))) %>% 
    mutate(lpm = modify_depth(lpm, 2, broom::tidy)) %>% 
    mutate(lpm = modify_depth(lpm, 2, ~select(., term, estimate))) %>% 
    mutate(lpm = modify_depth(lpm, 2, ~filter(., term != "(Intercept)"))) %>% 
    mutate(lpm = map(lpm, bind_rows)) %>% 
    mutate(true_effect = modify_depth(simulations, 2, ~meffects(., coefs = coefs[[1]]))) %>% 
    mutate(true_effect = map(true_effect, bind_rows))

  simulation_results <- results %>% 
    mutate(difference = map2(.x = lpm, .y = true_effect, bind_cols)) %>% 
    mutate(difference = map(difference, ~mutate(., difference = true_effect - estimate))) %>% 
    mutate(difference = map(difference, ~select(., term, difference))) %>% 
    pull(difference) %>% 
    .[[1]]

  simulation_results %>% 
    group_by(term) %>% 
    summarise(mean = mean(difference), 
              sd = sd(difference))
}

And now, let’s run the simulation for different parameters and sizes:

monte_carlo(c(.5, 2, 4), 100, 10)
## # A tibble: 2 x 3
##   term      mean     sd
##   <chr>    <dbl>  <dbl>
## 1 x1    -0.00826 0.0291
## 2 x2    -0.00732 0.0412
monte_carlo(c(.5, 2, 4), 100, 100)
## # A tibble: 2 x 3
##   term     mean     sd
##   <chr>   <dbl>  <dbl>
## 1 x1    0.00360 0.0392
## 2 x2    0.00517 0.0446
monte_carlo(c(.5, 2, 4), 100, 500)
## # A tibble: 2 x 3
##   term       mean     sd
##   <chr>     <dbl>  <dbl>
## 1 x1    -0.00152  0.0371
## 2 x2    -0.000701 0.0423
monte_carlo(c(pi, 6, 9), 100, 10)
## # A tibble: 2 x 3
##   term      mean     sd
##   <chr>    <dbl>  <dbl>
## 1 x1    -0.00829 0.0546
## 2 x2     0.00178 0.0370
monte_carlo(c(pi, 6, 9), 100, 100)
## # A tibble: 2 x 3
##   term     mean     sd
##   <chr>   <dbl>  <dbl>
## 1 x1    0.0107  0.0608
## 2 x2    0.00831 0.0804
monte_carlo(c(pi, 6, 9), 100, 500)
## # A tibble: 2 x 3
##   term     mean     sd
##   <chr>   <dbl>  <dbl>
## 1 x1    0.00879 0.0522
## 2 x2    0.0113  0.0668

We see that, at least for these sets of parameters, the LPM does a good job of estimating the marginal effects.

Now, this study might in itself not be very interesting to you, but I believe the general approach is quite useful and flexible enough to be adapted to all kinds of use-cases.

Hope you enjoyed! If you found this blog post useful, you might want to follow me on Twitter for blog post updates, and buy me an espresso or donate via paypal.me.
