Dealing with heteroskedasticity; regression with robust standard errors using R
Forecasting my weight with R
Getting data from pdfs using the pdftools package
Imputing missing values in parallel using {furrr}
Missing data imputation and instrumental variables regression: the tidy approach
{pmice}, an experimental package for missing data imputation in parallel using {mice} and {furrr}
Building formulae
Functional peace of mind
Get basic summary statistics for all the variables in a data frame
Getting {sparklyr}, {h2o}, {rsparkling} to work together and some fun with bash
Importing 30GB of data into R with sparklyr
Introducing brotools
It's lists all the way down
It's lists all the way down, part 2: We need to go deeper
Keep trying that api call with purrr::possibly()
Lesser known dplyr 0.7* tricks
Lesser known dplyr tricks
Lesser known purrr tricks
Make ggplot2 purrr
Mapping a list of functions to a list of datasets with a list of columns as arguments
Predicting job search by training a random forest on an unbalanced dataset
Teaching the tidyverse to beginners
Why I find tidyeval useful
tidyr::spread() and dplyr::rename_at() in action
Easy peasy STATA-like marginal effects with R
Functional programming and unit testing for data munging with R available on Leanpub
How to use jailbreakr
My free book has a cover!
Work on lists of datasets instead of individual datasets by using functional programming
Method of Simulated Moments with R
New website!
Nonlinear Gmm with R - Example with a logistic regression
Simulated Maximum Likelihood with R
Bootstrapping standard errors for difference-in-differences estimation with R
Careful with tryCatch
Data frame columns as arguments to dplyr functions
Export R output to a file
I've started writing a 'book': Functional programming and unit testing for data munging with R
Introduction to programming econometrics with R
Merge a list of datasets together
Object Oriented Programming with R: An example with a Cournot duopoly
R, R with Atlas, R with OpenBLAS and Revolution R Open: which is fastest?
Read a lot of datasets at once with R
Unit testing with R
Update to Introduction to programming econometrics with R
Using R as a Computer Algebra System with Ryacas

Sometimes it is useful to export the output of a long-running R command. For example, you might want to run a time consuming regression just before leaving work on Friday night, but would like to get the output saved inside your Dropbox folder to take a look at the results before going back to work on Monday.

This can be achieved very easily using `capture.output()` and `cat()` like so:

```
out <- capture.output(summary(my_very_time_consuming_regression))
cat("My title", out, file="summary_of_my_very_time_consuming_regression.txt", sep="\n", append=TRUE)
```

`my_very_time_consuming_regression` is an object of class `lm`, for example. I save the output of `summary(my_very_time_consuming_regression)` as text using `capture.output()` and store it in a variable called `out`. I then write `out` to a file called `summary_of_my_very_time_consuming_regression.txt`, with the first line being `My title` (you can put anything there). The file `summary_of_my_very_time_consuming_regression.txt` doesn’t have to already exist in your working directory. The option `sep="\n"` is important, or else the whole output will be written on a single line. Finally, `append=TRUE` makes sure your file won’t be overwritten; additional output will be appended to the file, which can be nice if you want to compare different versions of your model.
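Here is a quick, self-contained sketch of the same idea, using the built-in `mtcars` dataset and a fast `lm()` fit as a stand-in for the long-running model (the model and file names here are just illustrative):

```
# A fast lm() fit standing in for the time-consuming regression
quick_model <- lm(mpg ~ wt + hp, data = mtcars)

# Capture the printed summary as a character vector, one element per line
out <- capture.output(summary(quick_model))

# Write (or append) the summary to a text file, one line per element;
# the timestamp makes successive runs easy to tell apart
cat("Run from", format(Sys.time()), out,
    file = "summary_quick_model.txt", sep = "\n", append = TRUE)
```

Because of `append=TRUE`, running this after each model tweak accumulates the summaries in one file, so you can diff the versions later.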