
Exporting editable plots from R to Powerpoint: making ggplot2 purrr with officer

I was recently confronted with the following problem: creating hundreds of plots that could still be edited by our client. This meant exporting the graphs to Excel or Powerpoint or some other tool familiar to the client, instead of exporting the plots directly to pdf or png as I would normally do. I still wanted to use R for this, because I could do what I always do when I need to perform repetitive tasks such as producing hundreds of plots: map over a list of, say, countries, and make one plot per country. This is something I discussed in a previous blog post, Make ggplot2 purrr.

So, after some online searching, I found the {officer} package. This package allows you to put objects into Microsoft documents, for example editable plots into a Powerpoint document. This is what I will show in this blog post.

Let’s start by loading the required packages:

library("tidyverse")
library("officer")
library("rvg")

Then, I will use the data from the time use survey, which I discussed in a previous blog post, Going from a human readable Excel file to a machine-readable csv with {tidyxl}.

You can download the data here.

Let’s import and prepare it:

time_use <- rio::import("clean_data.csv")
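
As an aside, rio::import() guesses the file format from the extension. If you prefer not to depend on {rio}, a roughly equivalent call, assuming clean_data.csv is a standard comma-separated file, would be the following (readr::read_csv() returns a tibble rather than a data frame, which makes no difference for what follows):

time_use <- readr::read_csv("clean_data.csv")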


time_use <- time_use %>%
    filter(population %in% c("Male", "Female")) %>%
    filter(activities %in% c("Personal care", "Sleep", "Eating", 
                             "Employment", "Household and family care")) %>%
    group_by(day) %>%
    nest()

I only kept two population categories, “Male” and “Female”, and 5 activities. Then I grouped by day and nested the data. This is what it looks like:

time_use
## # A tibble: 3 x 2
##   day                         data             
##   <chr>                       <list>           
## 1 Year 2014_Monday til Friday <tibble [10 × 4]>
## 2 Year 2014_Saturday          <tibble [10 × 4]>
## 3 Year 2014_Sunday            <tibble [10 × 4]>

As shown, time_use is a tibble with 2 columns: the first, day, contains the days, and the second, data, is a list-column in which each element is itself a tibble. Let’s take a look inside one:

time_use$data[1]
## [[1]]
## # A tibble: 10 x 4
##    population activities                time  time_in_minutes
##    <chr>      <chr>                     <chr>           <int>
##  1 Male       Personal care             11:00             660
##  2 Male       Sleep                     08:24             504
##  3 Male       Eating                    01:46             106
##  4 Male       Employment                08:11             491
##  5 Male       Household and family care 01:59             119
##  6 Female     Personal care             11:15             675
##  7 Female     Sleep                     08:27             507
##  8 Female     Eating                    01:48             108
##  9 Female     Employment                06:54             414
## 10 Female     Household and family care 03:49             229
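
As an aside, time_use$data[1] returns a one-element list (hence the [[1]] in the output above); to extract the tibble itself, you would use double brackets:

time_use$data[[1]]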

I can now create plots for each of the days with the following code:

my_plots <- time_use %>%
    # map2() loops over two lists in parallel: the nested data (.x) and the
    # day labels (.y), producing one dodged bar chart per day
    mutate(plots = map2(.y = day, .x = data, ~ggplot(data = .x) + theme_minimal() +
                       geom_col(aes(y = time_in_minutes, x = activities, fill = population), 
                                position = "dodge") +
                       ggtitle(.y) +
                       ylab("Time in minutes") +
                       xlab("Activities")))

These steps are all detailed in my blog post Make ggplot2 purrr. Let’s take a look at my_plots:

my_plots
## # A tibble: 3 x 3
##   day                         data              plots 
##   <chr>                       <list>            <list>
## 1 Year 2014_Monday til Friday <tibble [10 × 4]> <gg>  
## 2 Year 2014_Saturday          <tibble [10 × 4]> <gg>  
## 3 Year 2014_Sunday            <tibble [10 × 4]> <gg>

The last column, called plots, is a list-column where each element is a plot! We can take a look at one:

my_plots$plots[1]
## [[1]]

Now, this is where I would normally export these plots as pdfs or pngs. But that is not what I need: I need to export them as editable charts for Powerpoint. To do this for a single plot, I would do the following (as per {officer}’s documentation):

read_pptx() %>%
    add_slide(layout = "Title and Content", master = "Office Theme") %>%
    ph_with_vg(code = print(one_plot), type = "body") %>% 
    print(target = path)
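
In this snippet, one_plot stands for a single ggplot object and path for the output file path. Note that in more recent versions of {officer} and {rvg}, ph_with_vg() has been superseded by ph_with() combined with rvg::dml(); assuming up-to-date versions of both packages, a roughly equivalent call would be:

read_pptx() %>%
    add_slide(layout = "Title and Content", master = "Office Theme") %>%
    # dml() wraps the ggplot object as editable vector graphics
    ph_with(value = dml(ggobj = one_plot), location = ph_location_type(type = "body")) %>%
    print(target = path)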

To map this over a list of arguments, I wrote a wrapper:

create_pptx <- function(plot, path){
    # If the pptx does not exist yet, start from a blank document;
    # otherwise open the existing one so the new slide gets appended
    if(!file.exists(path)) {
        out <- read_pptx()
    } else {
        out <- read_pptx(path)
    }
    
    out %>%
        add_slide(layout = "Title and Content", master = "Office Theme") %>%
        ph_with_vg(code = print(plot), type = "body") %>% 
        print(target = path) # print(target = ...) writes the file to disk
}

This function takes two arguments, plot and path. plot must be a ggplot object such as the ones contained inside the plots column of the my_plots tibble, and path is the path where I want to save the pptx.

The first lines check whether the file already exists: if it does, the slides get added to the existing file; if not, a new pptx gets created. The rest of the code is very similar to the one from the documentation. Now, to create my pptx, I simply need to map over the plots column and provide a path:

map(my_plots$plots, create_pptx, path = "test.pptx")
## Warning in doc_parse_file(con, encoding = encoding, as_html = as_html,
## options = options): Failed to parse QName 'xsi:xmlns:' [202]

(This warning was emitted several more times, once per parsed slide; the file still gets written correctly, as the output below shows.)
## [[1]]
## [1] "/home/cbrunos/Documents/b-rodrigues.github.com/content/blog/test.pptx"
## 
## [[2]]
## [1] "/home/cbrunos/Documents/b-rodrigues.github.com/content/blog/test.pptx"
## 
## [[3]]
## [1] "/home/cbrunos/Documents/b-rodrigues.github.com/content/blog/test.pptx"

Here is the end result:

Inside Powerpoint (or, in this case, LibreOffice), the plots are geometric shapes that can now be edited!

If you found this blog post useful, you might want to follow me on twitter for blog post updates.

Buy me an Espresso