How to download all links to .zip files on a given web page using wget/curl?

Tags: curl, download, wget

Curl Problem Overview


A page contains links to a set of .zip files, all of which I want to download. I know this can be done with wget or curl. How is it done?

Curl Solutions


Solution 1 - Curl

The command is:

wget -r -np -l 1 -A zip http://example.com/download/

Options meaning:

-r,  --recursive          specify recursive download.
-np, --no-parent          don't ascend to the parent directory.
-l,  --level=NUMBER       maximum recursion depth (inf or 0 for infinite).
-A,  --accept=LIST        comma-separated list of accepted extensions.
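If you want to do the same with curl, which has no recursive mode, one minimal sketch is to extract the .zip links yourself and feed them back to curl. This assumes the page is plain HTML, the hrefs are absolute URLs, and http://example.com/download/ stands in for the real page:

# Fetch the page, cut out the quoted .zip hrefs, then download each one.
curl -s http://example.com/download/ \
  | grep -oE 'href="[^"]*\.zip"' \
  | sed 's/^href="//; s/"$//' \
  | xargs -n 1 curl -O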

Solution 2 - Curl

The above solution did not work for me. Only this one did:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off [url of website]

Options meaning:

-r            recursive
-l1           maximum recursion depth (1 = use only this directory)
-H            span hosts (visit other hosts in the recursion)
-t1           number of retries (here: 1)
-nd           don't make new directories; put downloaded files in this one
-N            turn on timestamping
-A.mp3        download only .mp3 files (use -A.zip for the .zip files in the question)
-erobots=off  execute "robots=off" as if it were part of .wgetrc (i.e. ignore robots.txt)
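Adapted to the .zip files the question actually asks about (with http://example.com/download/ as a stand-in for the real page), the same command would be:

wget -r -l1 -H -t1 -nd -N -np -A.zip -erobots=off http://example.com/download/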

Solution 3 - Curl

For other scenarios, when I want some parallel magic, I use:

curl [url] | grep -i [file ending] | sed -n 's/.*href="\([^"]*\)".*/\1/p' | parallel -N5 wget
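As a concrete (hypothetical) instance, this pulls every .zip link from http://example.com/download/ and hands them to wget five at a time. This quick-and-dirty approach assumes the hrefs are absolute URLs and that each one sits on its own line of HTML:

curl -s http://example.com/download/ \
  | grep -i '\.zip' \
  | sed -n 's/.*href="\([^"]*\)".*/\1/p' \
  | parallel -N5 wget

If the page uses relative links, prepend the base URL to each extracted path before the parallel step.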

Attributions

All content on this page is sourced from the original question and answers on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type       Original Author   Original Content on Stackoverflow
Question           uyetch            View Question on Stackoverflow
Solution 1 - Curl  creaktive         View Answer on Stackoverflow
Solution 2 - Curl  K.-Michael Aye    View Answer on Stackoverflow
Solution 3 - Curl  M Lindblad        View Answer on Stackoverflow