Using R to download zipped data file, extract, and import data

Tags: R, Connection, Zip, R Faq

R Problem Overview


@EZGraphs on Twitter writes: "Lots of online csvs are zipped. Is there a way to download, unzip the archive, and load the data to a data.frame using R? #Rstats"

I was also trying to do this today, but ended up just downloading the zip file manually.

I tried something like:

fileName <- "http://www.newcl.org/data/zipfiles/a1.zip"
con1 <- unz(fileName, filename="a1.dat", open = "r")

but I feel as if I'm a long way off. Any thoughts?

R Solutions


Solution 1 - R

Zip archives are actually more of a 'filesystem' with content metadata etc. See help(unzip) for details. So to do what you sketch out above you need to:

  1. Create a temporary file name (e.g. with tempfile())
  2. Use download.file() to fetch the zip archive into that temporary file
  3. Use unz() to extract the target file from the temporary file
  4. Remove the temporary file via unlink()

which in code (thanks for the basic example, but this is simpler) looks like

temp <- tempfile()
download.file("http://www.newcl.org/data/zipfiles/a1.zip",temp)
data <- read.table(unz(temp, "a1.dat"))
unlink(temp)

Compressed (.z), gzipped (.gz), or bzip2ed (.bz2) files are just the single compressed file, and those you can read directly from a connection. So get the data provider to use that instead :)
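
For example, a minimal sketch of reading a gzipped file directly (the .gz URL below is hypothetical; it assumes the provider publishes a1.dat.gz instead of a zip archive):

temp <- tempfile(fileext = ".gz")
download.file("http://www.newcl.org/data/zipfiles/a1.dat.gz", temp)  # hypothetical gzipped file
data <- read.table(gzfile(temp))  # gzfile() decompresses the file transparently
unlink(temp)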

Solution 2 - R

Just for the record, I tried translating Dirk's answer into code :-P

temp <- tempfile()
download.file("http://www.newcl.org/data/zipfiles/a1.zip",temp)
con <- unz(temp, "a1.dat")
data <- matrix(scan(con),ncol=4,byrow=TRUE)
unlink(temp)

Solution 3 - R

I used the CRAN package "downloader", found at http://cran.r-project.org/web/packages/downloader/index.html. Much easier.

library(downloader)

download(url, destfile = "dataset.zip", mode = "wb")  # url holds the address of the zip archive
unzip("dataset.zip", exdir = "./")
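
The extracted file(s) can then be read in the usual way. A short sketch, with the caveat that the answer does not name the file inside the archive, so it is listed first, and read.csv() is only an assumption about the format:

files <- unzip("dataset.zip", list = TRUE)  # list the archive contents without extracting again
data <- read.csv(files$Name[1])             # assumes the first entry is a csv in the working directory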

Solution 4 - R

For Mac (and I assume Linux)...

If the zip archive contains a single file, you can use the bash command funzip, in conjunction with fread from the data.table package:

library(data.table)
dt <- fread("curl http://www.newcl.org/data/zipfiles/a1.zip | funzip")

In cases where the archive contains multiple files, you can use tar instead to extract a specific file to stdout:

dt <- fread("curl http://www.newcl.org/data/zipfiles/a1.zip | tar -xf- --to-stdout *a1.dat")

Solution 5 - R

Here is an example that works for files which cannot be read in with the read.table function. This example reads a .xls file using read_xls() from the readxl package.

url <-"https://www1.toronto.ca/City_Of_Toronto/Information_Technology/Open_Data/Data_Sets/Assets/Files/fire_stns.zip"

temp <- tempfile()
temp2 <- tempfile()

download.file(url, temp)
unzip(zipfile = temp, exdir = temp2)
data <- read_xls(file.path(temp2, "fire station x_y.xls"))

unlink(c(temp, temp2))

Solution 6 - R

To do this using data.table, I found that the following works. Unfortunately, the original link does not work anymore, so I used a link to another data set.

library(data.table)
temp <- tempfile()
download.file("https://www.bls.gov/tus/special.requests/atusact_0315.zip", temp)
# unzip() extracts atusact_0315.dat into the working directory and returns its path,
# which is then passed to fread()
timeUse <- fread(unzip(temp, files = "atusact_0315.dat"))
unlink(temp)  # unlink() (rather than rm()) deletes the downloaded temp file itself

I know this is possible in a single line since you can pass bash scripts to fread, but I am not sure how to download a .zip file, extract, and pass a single file from that to fread.
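
Building on the funzip pipeline from Solution 4, a one-liner along these lines should work on a Unix-like system. This is only a sketch: it assumes curl and funzip are available on the PATH and that the .dat file is the first (or only) entry in the archive.

library(data.table)
# curl streams the archive to funzip, which decompresses its first member to stdout;
# fread() then reads that output from the shell command
timeUse <- fread("curl -s https://www.bls.gov/tus/special.requests/atusact_0315.zip | funzip")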

Solution 7 - R

Try this code. It works for me:

unzip(zipfile="<directory and filename>",
      exdir="<directory where the content will be extracted>")

Example:

unzip(zipfile="./data/Data.zip",exdir="./data")

Solution 8 - R

I found that the following worked for me. These steps come from BTD's YouTube video, Managing Zipfile's in R:

zip.url <- "url_address.zip"    # address of the zip archive to download
dir <- getwd()
zip.file <- "file_name.zip"     # local name for the downloaded archive
zip.combine <- as.character(paste(dir, zip.file, sep = "/"))
download.file(zip.url, destfile = zip.combine)
unzip(zip.file)

Solution 9 - R

The import() function from the rio package would be very suitable for this - it uses the file extension of a file name to determine what kind of file it is, so it will work with a large variety of file types. I've also used unzip() to list the file names within the zip file, so it's not necessary to specify the file name(s) manually.

library(rio)

# create a temporary directory
td <- tempdir()

# create a temporary file
tf <- tempfile(tmpdir=td, fileext=".zip")

# download file from internet into temporary location
download.file("http://download.companieshouse.gov.uk/BasicCompanyData-part1.zip", tf)

# list zip archive
file_names <- unzip(tf, list=TRUE)

# extract files from zip file
unzip(tf, exdir=td, overwrite=TRUE)

# use when zip file has only one file
data <- import(file.path(td, file_names$Name[1]))

# use when zip file has multiple files
data_multiple <- lapply(file_names$Name, function(x) import(file.path(td, x)))

# delete the downloaded archive and the extracted files
unlink(c(tf, file.path(td, file_names$Name)))

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type   | Original Author   | Original Content on Stackoverflow
Question       | Jeromy Anglim     | View Question on Stackoverflow
Solution 1 - R | Dirk Eddelbuettel | View Answer on Stackoverflow
Solution 2 - R | George Dontas     | View Answer on Stackoverflow
Solution 3 - R | unixcreeper       | View Answer on Stackoverflow
Solution 4 - R | dnlbrky           | View Answer on Stackoverflow
Solution 5 - R | ColinTea          | View Answer on Stackoverflow
Solution 6 - R | Mallick Hossain   | View Answer on Stackoverflow
Solution 7 - R | Marcelo Tibau     | View Answer on Stackoverflow
Solution 8 - R | Gian Zlupko       | View Answer on Stackoverflow
Solution 9 - R | camnesia          | View Answer on Stackoverflow