Data frame columns as arguments to dplyr functions

Suppose that you would like to create a function which does a series of computations on a data frame. You would like to pass a column as this function’s argument. Something like:

data(cars)

# Naive attempt: add a column (named by col_name) with the speed in km/h
convertToKmh <- function(dataset, col_name){
  dataset$col_name <- dataset$speed * 1.609344
  return(dataset)
}

This example is obviously not very interesting (you don’t need a function for this), but it will illustrate the point. You would like to append a column called speed_in_kmh with the speed in kilometers per hour to this dataset, but this is what happens:

head(convertToKmh(cars, "speed_in_kmh"))
##   speed dist  col_name
## 1     4    2  6.437376
## 2     4   10  6.437376
## 3     7    4 11.265408
## 4     7   22 11.265408
## 5     8   16 12.874752
## 6     9   10 14.484096

Your column is not called speed_in_kmh but col_name! It turns out that there is a very simple solution:

convertToKmh <- function(dataset, col_name){
  # [ evaluates col_name, so the string it contains is used as the column name
  dataset[col_name] <- dataset$speed * 1.609344
  return(dataset)
}

head(convertToKmh(cars, "speed_in_kmh"))

##   speed dist speed_in_kmh
## 1     4    2     6.437376
## 2     4   10     6.437376
## 3     7    4    11.265408
## 4     7   22    11.265408
## 5     8   16    12.874752
## 6     9   10    14.484096

You can access columns with [] instead of $. Unlike $, which treats what follows it as a literal column name, [] evaluates its argument, so the string stored in col_name is used.
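To make the difference concrete, here is a minimal sketch using the same cars dataset:

col_name <- "speed"

# $ looks for a column literally called "col_name", which does not exist
cars$col_name
## NULL

# [ and [[ evaluate their argument, so the column "speed" is found
head(cars[col_name])
##   speed
## 1     4
## 2     4
## 3     7
## 4     7
## 5     8
## 6     9

head(cars[[col_name]])
## [1] 4 4 7 7 8 9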

But sometimes you want to do more complex things: for example, write a function that groups by a variable, computes new variables, filters by another one, and so on. You would like to avoid hardcoding these variables in your function (otherwise, why write a function at all?), and of course you would like to use dplyr to do it.

I often use dplyr functions in my functions. For illustration purposes, consider this very simple function:

simpleFunction <- function(dataset, col_name){
  require("dplyr")
  dataset %>%
    group_by(col_name) %>%
    summarise(mean_speed = mean(speed)) -> dataset
  return(dataset)
}

simpleFunction(cars, "dist")

This function takes a dataset as an argument, as well as a column name. However, this does not work. You get this error:

Error: unknown variable to group by : col_name 

The variable col_name is passed to simpleFunction() as a string, but group_by() expects a bare variable name, not a string. So why not try to convert col_name to a name?

simpleFunction <- function(dataset, col_name){
  require("dplyr")
  col_name <- as.name(col_name)
  dataset %>%
    group_by(col_name) %>%
    summarise(mean_speed = mean(speed)) -> dataset
  return(dataset)
}

simpleFunction(cars, "dist")

You get the same error as before:

Error: unknown variable to group by : col_name 

So how can you pass a column name to group_by()? Well, there is another version of group_by() called group_by_() that uses standard evaluation. You can learn more about it here. Let’s take a look at what happens when we use group_by_():

simpleFunction <- function(dataset, col_name){
  require("dplyr")
  dataset %>%
    group_by_(col_name) %>%
    summarise(mean_speed = mean(speed)) -> dataset
  return(dataset)
}

simpleFunction(cars, "dist")

## # A tibble: 35 x 2
##     dist mean_speed
##    <dbl>      <dbl>
## 1      2        4.0
## 2      4        7.0
## 3     10        6.5
## 4     14       12.0
## 5     16        8.0
## 6     17       11.0
## 7     18       10.0
## 8     20       13.5
## 9     22        7.0
## 10    24       12.0
## # … with 25 more rows

We can even use a formula instead of a string:

simpleFunction(cars, ~dist)

## # A tibble: 35 x 2
##     dist mean_speed
##    <dbl>      <dbl>
## 1      2        4.0
## 2      4        7.0
## 3     10        6.5
## 4     14       12.0
## 5     16        8.0
## 6     17       11.0
## 7     18       10.0
## 8     20       13.5
## 9     22        7.0
## 10    24       12.0
## # … with 25 more rows

What if you want to pass column names and constants, for example to filter without hardcoding anything?

Trying to do it naively will only yield pain and despair:

simpleFunction <- function(dataset, col_name, value){
  require("dplyr")
  dataset %>%
    filter_(col_name == value) %>%
    summarise(mean_speed = mean(speed)) -> dataset
  return(dataset)
}

simpleFunction(cars, "dist", 10)

##   mean_speed
## 1        NaN

simpleFunction(cars, dist, 10)

Error in col_name == value : comparison (1) is possible only for atomic and list types

simpleFunction(cars, ~dist, 10)

##   mean_speed
## 1        NaN

To solve this issue, we need to know a little bit about two concepts: lazy evaluation and non-standard evaluation. I recommend you read the chapter on non-standard evaluation from Hadley Wickham’s book Advanced R, as well as the section on lazy evaluation.
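Both concepts can be illustrated with a few lines of base R (a minimal sketch, not tied to dplyr): function arguments are only evaluated when they are actually used, and substitute() lets a function capture the expression passed to an argument instead of its value.

# Lazy evaluation: y is never used, so the stop() inside it never runs
f <- function(x, y) x
f(1, stop("this is never evaluated"))
## [1] 1

# Non-standard evaluation: substitute() captures the unevaluated expression
g <- function(x) substitute(x)
g(dist == 10)
## dist == 10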

A nice package called lazyeval can help us out. We would like to make R understand that the column name is not col_name, but the string it contains, "dist", and that we then want to filter() the rows where dist equals 10.

In the lazyeval package, you’ll find the function interp(). According to its documentation, interp() allows you to “build an expression up from a mixture of constants and variables”.

Take a look at this example:

library(lazyeval)
interp(~x+y, x = 2)
## ~2 + y

What you get back is this nice formula that you can then use within functions. To see why this is useful, let’s look at the above example again, and make it work using interp():

simpleFunction <- function(dataset, col_name, value){
  require("dplyr")
  require("lazyeval")
  # Build the expression col_name == value, e.g. dist == 10
  filter_criteria <- interp(~y == x, .values = list(y = as.name(col_name), x = value))
  dataset %>%
    filter_(filter_criteria) %>%
    summarise(mean_speed = mean(speed)) -> dataset
  return(dataset)
}

simpleFunction(cars, "dist", 10)

##   mean_speed
## 1        6.5

And now it works! Note that you have to pass the column name as a string, though: as.name() needs a character string to convert, and a bare dist would be evaluated (and not found) before the conversion could take place.
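If you would rather call the function with a bare column name, one possible workaround (just a sketch; simpleFunction2 is a hypothetical variant, not part of the solution above) is to capture the unevaluated name with substitute() and deparse it into a string before handing it to interp():

simpleFunction2 <- function(dataset, col_name, value){
  require("dplyr")
  require("lazyeval")
  # Capture the unevaluated argument and turn it into a string, e.g. "dist"
  col_name <- deparse(substitute(col_name))
  filter_criteria <- interp(~y == x, .values = list(y = as.name(col_name), x = value))
  dataset %>%
    filter_(filter_criteria) %>%
    summarise(mean_speed = mean(speed)) -> dataset
  return(dataset)
}

# Same result as above, but called with a bare name
simpleFunction2(cars, dist, 10)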

Sources: apart from the documents above, the following Stack Overflow threads helped me out quite a lot: “In R: pass column name as argument and use it in function with dplyr::mutate() and lazyeval::interp()” and “Non-standard evaluation (NSE) in dplyr’s filter_ & pulling data from MySQL”.