Copy multiple files from an S3 bucket

Amazon Web Services, Amazon S3, AWS CLI

Amazon Web Services Problem Overview


I am having trouble downloading multiple files from an AWS S3 bucket to my local machine.

I have the filenames of all the files I want to download, and I do not want any others. How can I do that? Is there any kind of loop in the AWS CLI that would let me iterate over the list?

There are a couple of hundred files to download, so it does not seem feasible to use a single command that takes all the filenames as arguments.

Amazon Web Services Solutions


Solution 1 - Amazon Web Services

You can also use the --recursive option, as described in the documentation for the cp command. It copies all objects under the specified prefix recursively.

Example:

aws s3 cp s3://folder1/folder2/folder3 . --recursive

will grab all files under folder1/folder2/folder3 and copy them to the local directory.
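
If you are not sure what a recursive copy will pick up, the cp command also accepts a --dryrun flag that prints the operations without performing them. A quick preview, using the same example path:

aws s3 cp s3://folder1/folder2/folder3 . --recursive --dryrun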

Solution 2 - Amazon Web Services

You might want to use "sync" instead of "cp". The following will download only the files with a ".txt" extension into your local folder:

aws s3 sync --exclude="*" --include="*.txt" s3://mybucket/mysubbucket .
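
The --include filter can be repeated to pull several extensions in one pass; a sketch, assuming you also want ".csv" files:

aws s3 sync --exclude="*" --include="*.txt" --include="*.csv" s3://mybucket/mysubbucket .

A nice side effect of sync is that rerunning the command only transfers files that are missing locally or have changed.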

Solution 3 - Amazon Web Services

Here is a bash script that reads all the filenames from a file, filename.txt, and downloads each object:

#!/bin/bash
set -e
# Read one object key per line and download it from the bucket.
while IFS= read -r line; do
  aws s3 cp "s3://bucket-name/$line" dest-path/
done < filename.txt
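
For reference, filename.txt should contain one object key per line, relative to the bucket root; hypothetical contents:

path/to/file1.csv
path/to/file2.csv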

Solution 4 - Amazon Web Services

As per the documentation, you can use the include and exclude filters with s3 cp as well, so you can do something like this:

aws s3 cp s3://bucket/folder/ . --recursive --exclude="*" --include="2017-12-20*"

Make sure you get the order of the exclude and include filters right: filters that appear later in the command take precedence, so reversing them changes the meaning entirely.
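
For example, with the filters reversed, the trailing --exclude overrides the earlier --include and nothing is copied:

aws s3 cp s3://bucket/folder/ . --recursive --include="2017-12-20*" --exclude="*"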

Solution 5 - Amazon Web Services

Tried all of the above without much joy. Finally, I adapted @Rajan's reply into a one-liner:

for file in whatever*.txt; do aws s3 cp "$file" s3://somewhere/in/my/bucket/; done
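
Note that this one-liner copies local files up to the bucket. For the download direction asked about in the question, the same pattern works with the arguments swapped; a sketch, assuming a short list of known keys:

for key in file1.txt file2.txt; do aws s3 cp "s3://somewhere/in/my/bucket/$key" .; done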

Solution 6 - Amazon Web Services

I wanted to read S3 object keys from a text file and download them to my machine in parallel.

I used this command

cat <filename>.txt | parallel aws s3 cp {} <output_dir>

The contents of my text file looked like this:

s3://bucket-name/file1.wav
s3://bucket-name/file2.wav
s3://bucket-name/file3.wav

Please make sure you don't have an empty line at the end of your text file. You can learn more about GNU parallel in its documentation.
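
If you want to cap the number of concurrent downloads, GNU parallel's -j option controls how many jobs run at once; a sketch, assuming the same kind of key list in files.txt and a local downloads directory:

parallel -j 8 aws s3 cp {} ./downloads/ < files.txt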

Solution 7 - Amazon Web Services

@Rajan's answer is a very good one; however, it fails when a line in the .txt file has no matching object in the source S3 bucket, because the set -e in that script aborts on the first failed copy. The code below also handles that case:

#!/bin/bash
# Copy each key listed in try.txt from the source bucket to the destination bucket.
while IFS= read -r line; do
  aws s3 cp "s3://your-s3-source-bucket/folder/$line" s3://your-s3-destination/folder/
done < try.txt

The only thing you need to do is run the bash file inside your AWS notebook:

!chmod +x YOUR-BASH-NAME.sh
!./YOUR-BASH-NAME.sh

Solution 8 - Amazon Web Services

I got the problem solved; it may be a little clumsy, but it works.

Using Python, I wrote multiple AWS download commands into one single .sh file, then executed it in the terminal.
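
The original Python code was not shown; for what it's worth, the same generate-then-run idea can be sketched in shell, where bucket-name, keys.txt, and download.sh are hypothetical names and keys.txt holds one object key per line:

#!/bin/bash
# Write one download command per key into a script, then run that script.
while IFS= read -r key; do
  echo "aws s3 cp \"s3://bucket-name/$key\" ./downloads/"
done < keys.txt > download.sh

bash download.sh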

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | DQI | View Question on Stackoverflow
Solution 1 - Amazon Web Services | siphiuel | View Answer on Stackoverflow
Solution 2 - Amazon Web Services | f.cipriani | View Answer on Stackoverflow
Solution 3 - Amazon Web Services | Rajan | View Answer on Stackoverflow
Solution 4 - Amazon Web Services | Chinmay Bhattar | View Answer on Stackoverflow
Solution 5 - Amazon Web Services | Hugh Perkins | View Answer on Stackoverflow
Solution 6 - Amazon Web Services | roronoa | View Answer on Stackoverflow
Solution 7 - Amazon Web Services | Sheykhmousa | View Answer on Stackoverflow
Solution 8 - Amazon Web Services | DQI | View Answer on Stackoverflow