Workflow for statistical analysis and report writing

Tags: R, Statistics, Data Visualization

R Problem Overview


Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this:

  1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.

  2. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).

  3. The analyst analyzes the data created in (2), gets close to her goal, but sees that she needs more data and so goes back to (1).

  4. Rinse repeat until the tables and graphics meet QA/QC and satisfy the client.

  5. Write report incorporating tables and graphics.

  6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change.

At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools.

Thanks!

PS:

Below is a basic Makefile that checks for dependencies on various intermediate datasets (with a .RData suffix) and scripts (.R suffix). Make uses timestamps to check dependencies, so if you touch ss07por.csv, it will see that this file is newer than all the files/targets that depend on it and execute the given scripts to update them accordingly. This is still a work in progress: it still needs a step for loading into a SQL database and a step for a templating language like Sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback!

http://www.gnu.org/software/make/manual/html_node/index.html#Top

R=/home/wsprague/R-2.9.2/bin/R

persondata.RData: ImportData.R ../../DATA/ss07por.csv Functions.R
	$R --slave -f ImportData.R

persondata.Munged.RData: MungeData.R persondata.RData Functions.R
	$R --slave -f MungeData.R

report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R
	$R --slave -f TabulateAndGraph.R > report.txt

R Solutions


Solution 1 - R

I generally break my projects into 4 pieces:

  1. load.R
  2. clean.R
  3. func.R
  4. do.R

load.R: Takes care of loading in all the data required. Typically this is a short file, reading in data from files, URLs and/or ODBC. Depending on the project at this point I'll either write out the workspace using save() or just keep things in memory for the next step.

clean.R: This is where all the ugly stuff lives - taking care of missing values, merging data frames, handling outliers.

func.R: Contains all of the functions needed to perform the actual analysis. source()'ing this file should have no side effects other than loading up the function definitions. This means that you can modify this file and reload it without having to go back and repeat steps 1 & 2, which can take a long time to run for large data sets.

do.R: Calls the functions defined in func.R to perform the analysis and produce charts and tables.

The main motivation for this set up is for working with large data whereby you don't want to have to reload the data each time you make a change to a subsequent step. Also, keeping my code compartmentalized like this means I can come back to a long forgotten project and quickly read load.R and work out what data I need to update, and then look at do.R to work out what analysis was performed.
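
A minimal sketch of how the four pieces might hang together (the file paths, column names, and the save/load steps here are purely illustrative, not a prescription):

# load.R -- read the raw inputs and snapshot them
raw <- read.csv("data/ss07por.csv")
save(raw, file = "cache/raw.RData")

# clean.R -- missing values, merges, outliers
load("cache/raw.RData")
clean <- raw[!is.na(raw$AGEP), ]          # e.g. drop records with missing age
save(clean, file = "cache/clean.RData")

# func.R -- function definitions only, no side effects
pop_by_group <- function(d, group) tapply(d$PWGTP, d[[group]], sum)

# do.R -- run the analysis and produce tables/charts
load("cache/clean.RData")
source("func.R")
pop_by_group(clean, "PUMA")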

Solution 2 - R

If you'd like to see some examples, I have a few small (and not so small) data cleaning and analysis projects available online. In most, you'll find a script to download the data, one to clean it up, and a few to do exploration and analysis:

Recently I have started numbering the scripts, so it's completely obvious in which order they should be run. (If I'm feeling really fancy I'll sometimes make it so that the exploration script will call the cleaning script which in turn calls the download script, each doing the minimal work necessary - usually by checking for the presence of output files with file.exists. However, most times this seems like overkill).
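
For example, the "minimal work necessary" chaining might look something like this (script and file names are hypothetical):

# 2-explore.R: rebuild the cleaned data only if it is missing
if (!file.exists("data-clean.csv")) source("1-clean.R")
clean <- read.csv("data-clean.csv")

# 1-clean.R: download the raw data only if it is missing
if (!file.exists("data-raw.csv")) source("0-download.R")
raw <- read.csv("data-raw.csv")
# ... cleaning steps ...
write.csv(raw, "data-clean.csv", row.names = FALSE)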

I use git for all my projects (a source code management system) so its easy to collaborate with others, see what is changing and easily roll back to previous versions.

If I do a formal report, I usually keep R and latex separate, but I always make sure that I can source my R code to produce all the code and output that I need for the report. For the sorts of reports that I do, I find this easier and cleaner than working with latex.

Solution 3 - R

I agree with the other responders: Sweave is excellent for report writing with R. And rebuilding the report with updated results is as simple as re-calling the Sweave function. It's completely self-contained, including all the analysis, data, etc. And you can version control the whole file.

I use the StatET plugin for Eclipse for developing the reports, and Sweave is integrated (Eclipse recognizes LaTeX formatting, etc.). On Windows, it's easy to use MiKTeX (http://miktex.org/).

I would also add that you can create beautiful reports with Beamer (http://en.wikipedia.org/wiki/Beamer_(LaTeX)). Creating a normal report is just as simple. I included an example below that pulls data from Yahoo! and creates a chart and a table (using quantmod). You can build this report like so:

Sweave(file = "test.Rnw")

Here's the Beamer document itself:

% 
\documentclass[compress]{beamer}
\usepackage{Sweave}
\usetheme{PaloAlto} 
\begin{document}

\title{test report}
\author{john doe}
\date{September 3, 2009} 

\maketitle

\begin{frame}[fragile]\frametitle{Page 1: chart}

<<echo=FALSE,fig=TRUE,height=4, width=7>>=
library(quantmod)
getSymbols("PFE", from="2009-06-01")
chartSeries(PFE)
@

\end{frame}


\begin{frame}[fragile]\frametitle{Page 2: table}

<<echo=FALSE,results=tex>>=
library(xtable)
xtable(PFE[1:10,1:4], caption = "PFE")
@

\end{frame}

\end{document}

Solution 4 - R

I just wanted to add, in case anyone missed it, that there's a great post on the learnr blog (http://learnr.wordpress.com/2009/09/09/brew-creating-repetitive-reports/) about creating repetitive reports with Jeffrey Horner's brew package (http://cran.r-project.org/web/packages/brew/index.html). Matt and Kevin both mentioned brew above. I haven't actually used it much myself.

The entry follows a nice workflow, so it's well worth a read:

  1. Prepare the data.
  2. Prepare the report template.
  3. Produce the report.

Actually producing the report once the first two steps are complete is very simple:

library(tools)
library(brew)
brew("population.brew", "population.tex")
texi2dvi("population.tex", pdf = TRUE)

Solution 5 - R

For creating custom reports, I've found it useful to incorporate many of the existing tips suggested here.

Generating reports: A good strategy for generating reports involves the combination of Sweave, make, and R.

Editor: Good editors for preparing Sweave documents include:

  • StatET and Eclipse
  • Emacs and ESS
  • Vim and Vim-R
  • RStudio

Code organisation: I find two strategies useful:

Solution 6 - R

I use Sweave for the report-producing side of this, but I've also been hearing about the brew package - though I haven't yet looked into it.

Essentially, I have a number of surveys for which I produce summary statistics. Same surveys, same reports every time. I built a Sweave template for the reports (which takes a bit of work). But once the work is done, I have a separate R script that lets me point it at the new data. I press "Go", Sweave dumps out a few score .tex files, and I run a little Python script to pdflatex them all. My predecessor spent ~6 weeks each year on these reports; I spend about 3 days (mostly on cleaning data; escape characters are hazardous).
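
A rough sketch of what that driver script can look like (the survey names are made up, and tools::texi2pdf stands in here for the Python/pdflatex step):

# run_reports.R -- regenerate every survey report from the same Sweave template
library(tools)
surveys <- c("survey_a", "survey_b")     # hypothetical survey identifiers

for (s in surveys) {
  survey_name <- s                       # chunks in the template can refer to survey_name
  Sweave("report_template.Rnw", output = paste0(s, ".tex"))
  texi2pdf(paste0(s, ".tex"))            # stand-in for the Python/pdflatex step
}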

It's very possible that there are better approaches now, but if you do decide to go this route, let me know - I've been meaning to put up some of my Sweave hacks, and that would be a good kick in the pants to do so.

Solution 7 - R

I'm going to suggest something in a different sort of direction from the other submitters, based on the fact that you asked specifically about project workflow, rather than tools. Assuming you're relatively happy with your document-production model, it sounds like your challenges really may be centered more around issues of version tracking, asset management, and review/publishing process.

If that sounds correct, I would suggest looking into an integrated ticketing/source management/documentation tool like Redmine. Keeping related project artifacts such as pending tasks, discussion threads, and versioned data/code files together can be a great help even for projects well outside the traditional "programming" bailiwick.

Solution 8 - R

Agreed that Sweave is the way to go, with xtable for generating LaTeX tables. Although I haven't spent too much time working with them, the recently released tikzDevice package looks really promising, particularly when coupled with pgfSweave (which, as far as I know, is only available on rforge.net at this time -- there is a link to r-forge from there, but it's not responding for me at the moment).

Between the two, you'll get consistent formatting between text and figures (fonts, etc.). With brew, these might constitute the holy grail of report generation.

Solution 9 - R

At a more "meta" level, you might be interested in the CRISP-DM process model.

Solution 10 - R

"make" is great because (1) you can use it for all your work in any language (unlike, say, Sweave and Brew), (2) it is very powerful (enough to build all the software on your machine), and (3) it avoids repeating work. This last point is important to me because a lot of the work is slow; when I latex a file, I like to see the result in a few seconds, not the hour it would take to recreate the figures.

Solution 11 - R

I use project templates along with RStudio; currently mine contains the following folders:

  • info : pdfs, powerpoints, docs... which won't be used by any script
  • data input : data that will be used by my scripts but not generated by them
  • data output : data generated by my scripts for further use but not as a proper report.
  • reports : Only files that will actually be shown to someone else
  • R : All R scripts
  • SAS : Because I sometimes have to :'(

I wrote custom functions so I can call smart_save(x,y) or smart_load(x) to save or load RDS files to and from the data output folder (files named with variable names) so I'm not bothered by paths during my analysis.
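
A minimal version of those helpers might look like this (the folder name and .RDS naming follow the layout above; the actual functions may differ):

# Save/load objects as RDS files named after the variables, so paths never appear in the analysis
smart_save <- function(..., folder = "data output") {
  names <- sapply(as.list(substitute(list(...)))[-1], deparse)
  objs  <- list(...)
  for (i in seq_along(objs))
    saveRDS(objs[[i]], file.path(folder, paste0(names[i], ".RDS")))
}

smart_load <- function(x, folder = "data output", envir = parent.frame()) {
  name <- deparse(substitute(x))
  assign(name, readRDS(file.path(folder, paste0(name, ".RDS"))), envir = envir)
}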

A custom function new_project creates a numbered project folder, copies all the files from the template, renames the RProj file, edits the setwd calls, and sets the working directory to the new project.

All R scripts are in the R folder, structured as follows:


00_main.R
  • setwd
  • calls scripts 1 to 5

00_functions.R
  • All functions, and only functions, go there. If there are too many I'll separate them into several files, all named like 00_functions_something.R; in particular, if I plan to make a package out of some of them I'll put those apart.

00_explore.R
  • a bunch of script chunks where I'm testing things or exploring my data
  • It's the only file where I'm allowed to be messy.

01_initialize.R
  • Prefilled with a call to a more general initialize_general.R script from my template folder which loads the packages and data I always use and don't mind having in my workspace
  • loads 00_functions.R (prefilled)
  • loads additional libraries
  • set global variables

02_load data.R
  • loads csv/txt/xlsx/RDS files; there's a prefilled commented line for every file type
  • displays which files have been created in the workspace

03_pull data from DB.R
  • Uses dbplyr to fetch filtered and grouped tables from the DB
  • some prefilled commented lines to set up connections and fetch.
  • Keeps client-side operations to a bare minimum
  • No server side operations outside of this script
  • Displays which files have been created in the workspace
  • Saves these variables so they can be reloaded faster

Once it's been done once, I switch off a query_db boolean and the data will be reloaded from RDS next time.
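
In practice that guard is just a flag around the expensive step, something like (table and folder names are illustrative; query_db would be set in 01_initialize.R):

# 03_pull data from DB.R -- only hit the DB when query_db is TRUE
if (query_db) {
  # ... dbplyr queries, collect() the filtered/grouped results ...
  saveRDS(sales, "data output/sales.RDS")    # hypothetical table
} else {
  sales <- readRDS("data output/sales.RDS")  # reload the cached copy
}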

It can happen that I have to feed data back to the DBs; if so, I'll create additional steps.


04_Build.R
  • Data wrangling, all the fun dplyr / tidyr stuff goes there
  • displays which files have been created in the workspace
  • save these variables

Once it's been done once, I switch off a build boolean and the data will be reloaded from RDS next time.


05_Analyse.R
  • Summarize, model...
  • writes out Excel and csv report files

95_build ppt.R
  • template for powerpoint report using officer

96_prepare markdown.R
  • setwd
  • load data
  • set markdown parameters if needed
  • render

97_prepare shiny.R
  • setwd
  • load data
  • set shiny parameters if needed
  • runApp

98_Markdown report.Rmd
  • A report template

99_Shiny report.Rmd
  • An app template

Solution 12 - R

For writing a quick preliminary report or email to a colleague, I find that it can be very efficient to copy-and-paste plots into MS Word or an email or wiki page -- often best is a bitmapped screenshot (e.g. on mac, Apple-Shift-(Ctrl)-4). I think this is an underrated technique.

For a more final report, writing R functions to easily regenerate all the plots (as files) is very important. It does take more time to code this up.
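
One way to set that up, assuming each figure has its own function, is a small wrapper that rewrites every plot file in one call (names and the output folder are just placeholders):

# Each figure gets a function; regenerate_figures() rewrites them all as files
fig_population <- function() plot(1:10, main = "population")   # placeholder plot

regenerate_figures <- function(dir = "figures") {
  dir.create(dir, showWarnings = FALSE)
  png(file.path(dir, "population.png"), width = 800, height = 600)
  fig_population()
  dev.off()
  # ... one block (or a loop) per figure ...
}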

On the larger workflow issues, I like Hadley's answer on enumerating the code/data files for the cleaning and analysis flow. All of my data analysis projects have a similar structure.

Solution 13 - R

I'll add my voice for Sweave. For a complicated, multi-step analysis you can use a makefile to specify the different parts. That can prevent having to repeat the whole analysis if just one part has changed.

Solution 14 - R

I also do what Josh Reich does, only I do it by creating my personal R packages, as that helps me structure my code and data, and it is also quite easy to share those with others.

  1. create my package
  2. load
  3. clean
  4. functions
  5. do

creating my package: devtools::create('package_name')

load and clean: I create scripts in the data-raw/ subfolder of my package for loading, cleaning, and storing the resulting data objects in the package using devtools::use_data(object_name). Then I compile the package. From now on, calling library(package_name) makes these data available (and they are not loaded until necessary).

functions: I put the functions for my analyses into the R/ subfolder of my package, and export only those that need to be called from outside (and not the helper functions, which can remain invisible).

do: I create a script that uses the data and functions stored in my package. (If the analyses only need to be done once, I can put this script as well into the data-raw/ subfolder, run it, and store the results in the package to make it easily accessible.)
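
Strung together, the steps above look roughly like this (the package and object names are made up, and newer setups use usethis::use_data in place of devtools::use_data):

# One-time setup
devtools::create("myanalysis")                      # hypothetical package name

# data-raw/prepare_persons.R -- load, clean, store
persons <- read.csv("data-raw/ss07por.csv")
persons <- persons[!is.na(persons$AGEP), ]          # example cleaning step
devtools::use_data(persons, overwrite = TRUE)

# After rebuilding/installing the package:
library(myanalysis)
head(persons)                                       # data object, lazy-loaded on first use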

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | forkandwait | View Question on Stackoverflow
Solution 1 - R | Josh Reich | View Answer on Stackoverflow
Solution 2 - R | hadley | View Answer on Stackoverflow
Solution 3 - R | Shane | View Answer on Stackoverflow
Solution 4 - R | Shane | View Answer on Stackoverflow
Solution 5 - R | Jeromy Anglim | View Answer on Stackoverflow
Solution 6 - R | Matt Parker | View Answer on Stackoverflow
Solution 7 - R | rcoder | View Answer on Stackoverflow
Solution 8 - R | kmm | View Answer on Stackoverflow
Solution 9 - R | Jouni K. Seppänen | View Answer on Stackoverflow
Solution 10 - R | dank | View Answer on Stackoverflow
Solution 11 - R | moodymudskipper | View Answer on Stackoverflow
Solution 12 - R | Brendan OConnor | View Answer on Stackoverflow
Solution 13 - R | PaulHurleyuk | View Answer on Stackoverflow
Solution 14 - R | jciloa | View Answer on Stackoverflow