Downloading an entire S3 bucket?

Tags: Amazon Web Services, Amazon S3, AWS CLI

Amazon Web-Services Problem Overview


I noticed that there does not seem to be an option to download an entire S3 bucket from the AWS Management Console.

Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again but I don't know if there's an easier way.

Amazon Web-Services Solutions


Solution 1 - Amazon Web-Services

AWS CLI

See the "AWS CLI Command Reference" for more information.

AWS recently released their Command Line Tools, which work much like boto and can be installed using

sudo easy_install awscli

or

sudo pip install awscli

Once installed, you can then simply run:

aws s3 sync s3://<source_bucket> <local_destination>

For example:

aws s3 sync s3://mybucket .

will download all the objects in mybucket to the current directory.

And will output:

download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt

This will download all of your files using a one-way sync. It will not delete any existing files in your current directory unless you specify --delete, and it won't change or delete any files on S3.

You can also sync S3 bucket to S3 bucket, or local directory to S3 bucket.
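
For example (the bucket names here are placeholders):

aws s3 sync s3://source-bucket s3://destination-bucket
aws s3 sync ./local-directory s3://destination-bucket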

Check out the documentation and other examples.

While the above example shows how to download a full bucket, you can also download a folder recursively by running

aws s3 cp s3://BUCKETNAME/PATH/TO/FOLDER LocalFolderName --recursive

This will instruct the CLI to download all files and folder keys recursively within the PATH/TO/FOLDER directory within the BUCKETNAME bucket.

Solution 2 - Amazon Web-Services

You can use s3cmd to download your bucket:

s3cmd --configure
s3cmd sync s3://bucketnamehere/folder /destination/folder

There is another tool you can use called rclone. Here is a code sample from the rclone documentation:

rclone sync /home/local/directory remote:bucket
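
Note that in that sample the sync direction is local to remote (an upload). To download a bucket instead, swap the source and destination; a minimal sketch, assuming you have already set up a remote named remote with rclone config:

rclone sync remote:bucket /home/local/directory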

Solution 3 - Amazon Web-Services

I've used a few different methods to copy Amazon S3 data to a local machine, including s3cmd, and by far the easiest is Cyberduck.

All you need to do is enter your Amazon credentials and use the simple interface to download, upload, and sync any of your buckets, folders, or files.


Solution 4 - Amazon Web-Services

You have many options to do that, but the best one is to use the AWS CLI.

Here's a walk-through:

  1. Download and install the AWS CLI on your machine.

  2. Configure the AWS CLI:

     aws configure

Make sure you input valid access and secret keys, which you received when you created the account.

  3. Sync the S3 bucket using:

     aws s3 sync s3://yourbucket /local/path
    

In the above command, replace the following fields:

  • yourbucket >> your S3 bucket that you want to download.
  • /local/path >> path on your local system where you want to download all the files.

Solution 5 - Amazon Web-Services

To download using AWS S3 CLI:

aws s3 cp s3://WholeBucket LocalFolder --recursive
aws s3 cp s3://Bucket/Folder LocalFolder --recursive

To download using code, use the AWS SDK.

To download using GUI, use Cyberduck.

Solution 6 - Amazon Web-Services

The answer by @Layke is good, but if you have a ton of data and don't want to wait forever, you should read "AWS CLI S3 Configuration".

The following commands will tell the AWS CLI to use 1,000 threads to execute jobs (each a small file or one part of a multipart copy) and look ahead 100,000 jobs:

aws configure set default.s3.max_concurrent_requests 1000
aws configure set default.s3.max_queue_size 100000
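
These settings persist in your ~/.aws/config file; as a quick sanity check you can read a value back (a sketch, assuming the default profile):

aws configure get default.s3.max_concurrent_requests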

After running these, you can use the simple sync command:

aws s3 sync s3://source-bucket/source-path s3://destination-bucket/destination-path

or

aws s3 sync s3://source-bucket/source-path c:\my\local\data\path

On a system with 4 CPU cores and 16 GB of RAM, for cases like mine (3-50 GB files) the sync/copy speed went from about 9.5 MiB/s to 700+ MiB/s, a speed increase of 70x over the default configuration.

Solution 7 - Amazon Web-Services

If you use Visual Studio, download the "AWS Toolkit for Visual Studio".

After installing it, go to Visual Studio > AWS Explorer > S3 > your bucket, and double-click it.

In the window that opens you will be able to select all files. Right-click and download the files.

Solution 8 - Amazon Web-Services

For Windows, S3 Browser is the easiest way I have found. It is excellent software, and it is free for non-commercial use.

Solution 9 - Amazon Web-Services

Use this command with the AWS CLI:

aws s3 cp s3://bucketname . --recursive

Solution 10 - Amazon Web-Services

This worked 100% for me; I downloaded all files from my AWS S3 bucket.

  1. Install the AWS CLI (select your operating system and follow the steps): https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

  2. Check the AWS CLI version: aws --version

  3. Run the config command: aws configure

  4. aws s3 cp s3://yourbucketname your\local\path --recursive

E.g. (Windows): aws s3 cp s3://yourbucketname C:\aws-s3-backup\project-name --recursive

Check out this link: https://www.edureka.co/community/32955/how-to-download-an-entire-bucket-from-s3-to-local-folder

Solution 11 - Amazon Web-Services

Another option that could help some OS X users is Transmit.

It's an FTP program that also lets you connect to your S3 files. And it has an option to mount any FTP or S3 storage as a folder in the Finder, though that option is only available for a limited time.

Solution 12 - Amazon Web-Services

I've done a bit of development for S3 and I have not found a simple way to download a whole bucket.

If you want to code in Java, the jets3t lib is easy to use to create a list of buckets and iterate over that list to download them.

First, get an access/secret key pair from the AWS Management Console so you can create an S3Service object:

AWSCredentials awsCredentials = new AWSCredentials(YourAccessKey, YourAwsSecretKey);
s3Service = new RestS3Service(awsCredentials);

Then, get an array of your bucket's objects:

S3Object[] objects = s3Service.listObjects(YourBucketNameString);

Finally, iterate over that array to download the objects one at a time:

S3Object obj = s3Service.getObject(bucket, fileName);
file = obj.getDataInputStream();

I put the connection code in a thread-safe singleton. The necessary try/catch syntax has been omitted for brevity.

If you'd rather code in Python, you could use Boto instead.

Alternatively, BucketExplorer's "Downloading the whole bucket" feature may do what you want.

Solution 13 - Amazon Web-Services

The AWS CLI is the best option for uploading an entire folder to S3, or for downloading an entire S3 bucket locally.

To upload a whole folder to S3:

aws s3 sync . s3://BucketName

To download a whole S3 bucket locally:

aws s3 sync s3://BucketName . 

You can also specify a path like BucketName/Path to sync a particular folder within the bucket.
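
For example (the prefix here is a placeholder):

aws s3 sync s3://BucketName/Path ./local-folder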

Solution 14 - Amazon Web-Services

You can do this with https://github.com/minio/mc :

mc cp -r https://s3-us-west-2.amazonaws.com/bucketName/ localdir

mc also supports sessions, resumable downloads, uploads, and more. mc supports Linux, OS X, and Windows. It is written in Go and released under the Apache License 2.0.

Solution 15 - Amazon Web-Services

If you only want to download the bucket from AWS, first install the AWS CLI on your machine. In a terminal, change to the directory where you want to download the files and run this command:

aws s3 sync s3://bucket-name .

If you also want to keep the local and S3 directories in sync (in case you added some files to the local folder), run this command:

aws s3 sync . s3://bucket-name

Solution 16 - Amazon Web-Services

If you use Firefox with S3Fox, that DOES let you select all files (Shift-select the first and last) and right-click to download them all.

I've done it with 500+ files without any problem.

Solution 17 - Amazon Web-Services

To add another GUI option, we use WinSCP's S3 functionality. It's very easy to connect, only requiring your access key and secret key in the UI. You can then browse and download whatever files you require from any accessible buckets, including recursive downloads of nested folders.

Since it can be a challenge to clear new software through security and WinSCP is fairly prevalent, it can be really beneficial to just use it rather than try to install a more specialized utility.

Solution 18 - Amazon Web-Services

AWS CLI is the best option to download an entire S3 bucket locally.

  1. Install AWS CLI.

  2. Configure the AWS CLI with your security credentials and default AWS Region.

  3. To download the entire S3 bucket, use the command:

    aws s3 sync s3://yourbucketname localpath

Reference for using the AWS CLI with different AWS services: https://docs.aws.amazon.com/cli/latest/reference/

Solution 19 - Amazon Web-Services

On Windows, my preferred GUI tool for this is CloudBerry Explorer Freeware for Amazon S3. It has a fairly polished file explorer and FTP-like interface.

Solution 20 - Amazon Web-Services

If you have only files there (no subdirectories), a quick solution is to select all the files (click on the first, Shift+click on the last) and hit Enter, or right-click and select Open. For most data files this will download them straight to your computer.

Solution 21 - Amazon Web-Services

Try this command:

aws s3 sync yourBucketnameDirectory yourLocalDirectory

For example, if your bucket name is myBucket and the local directory is c:\local, then:

aws s3 sync s3://myBucket c:\local

For more information about the AWS CLI, check the AWS CLI installation docs.

Solution 22 - Amazon Web-Services

  1. Windows users need to download S3 Browser from this link, which also has installation instructions: http://s3browser.com/download.aspx

  2. Then provide your AWS credentials (access key, secret key, and region) to S3 Browser. This link contains the configuration instructions: http://s3browser.com/s3browser-first-run.aspx

  3. All your S3 buckets will now be visible in the left panel of S3 Browser.

  4. Simply select the bucket, click on the Buckets menu in the top left corner, then select the "Download all files to" option from the menu.

  5. Then browse to a folder to download the bucket to a particular place.

  6. Click on OK and your download will begin.

Solution 23 - Amazon Web-Services

aws s3 sync is the perfect solution. It does not do a two-way sync; it is one-way, from source to destination. Also, if you have lots of items in the bucket and are downloading to an EC2 instance, it is a good idea to create an S3 VPC endpoint first, so that the download is faster (traffic stays on the AWS network instead of going over the public internet) and incurs no data-transfer charges.
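
A minimal sketch of creating such a gateway endpoint with the AWS CLI (the VPC ID, route table ID, and Region are placeholders for your own values):

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0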

Solution 24 - Amazon Web-Services

As @Layke said, downloading the file through the S3 CLI is the best practice; it is safe and secure. But in some cases, people need to use wget to download a file, and here is the solution:

aws s3 presign s3://<bucket-name>/<object-key>

This presign command will give you a temporary public URL which you can use to download content from S3, in your case using wget or any other download client.
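
For example, a minimal sketch combining the two (the bucket and key are placeholders; --expires-in sets the URL lifetime in seconds):

wget "$(aws s3 presign s3://<bucket-name>/<object-key> --expires-in 3600)"

Note that presign works on a single object key, so to fetch many files this way you would need to presign each key.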

Solution 25 - Amazon Web-Services

You just need to pass --recursive and --include "*":

aws --region "${BUCKET_REGION}" s3 cp s3://${BUCKET}${BUCKET_PATH}/ ${LOCAL_PATH}/tmp --recursive --include "*" 2>&1

Solution 26 - Amazon Web-Services

It's always better to use the AWS CLI for downloading/uploading files to S3. Sync will help you resume without any hassle.

aws s3 sync s3://bucketname/ .

Solution 27 - Amazon Web-Services

In addition to the suggestions for aws s3 sync, I would also recommend looking at s5cmd (https://github.com/peak/s5cmd).

In my experience I found this to be substantially faster than the AWS CLI for multiple downloads or large downloads.

s5cmd supports wildcards so something like this would work:

s5cmd cp s3://bucket-name/* ./folder

Solution 28 - Amazon Web-Services

You can use sync to download a whole S3 bucket. For example, to download the whole bucket named bucket1 to the current directory:

aws s3 sync s3://bucket1 .

Solution 29 - Amazon Web-Services

Here is some code to download a bucket's matching objects, list all buckets, and list a bucket's contents. (The app, CONST, conn, and related helpers come from the surrounding application; the imports below are for the AWS SDK for Java v1.)

    import java.io.File;
    import java.io.IOException;

    import com.amazonaws.AmazonClientException;
    import com.amazonaws.auth.profile.ProfileCredentialsProvider;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.Bucket;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.S3ObjectSummary;
    import com.amazonaws.util.StringUtils;

    // Set up the connection and fetch the list of buckets
    private static void dBConnection() {
        app.setAwsCredentials(CONST.getAccessKey(), CONST.getSecretKey());
        conn = new AmazonS3Client(app.getAwsCredentials());
        app.setListOfBuckets(conn.listBuckets());
        System.out.println(CONST.getConnectionSuccessfullMessage());
    }

    // Page through the bucket listing and download every object whose key matches
    private static void downloadBucket() {
        do {
            for (S3ObjectSummary objectSummary : app.getS3Object().getObjectSummaries()) {
                app.setBucketKey(objectSummary.getKey());
                app.setBucketName(objectSummary.getBucketName());
                if (objectSummary.getKey().contains(CONST.getDesiredKey())) {
                    // Download the object to a local file
                    try {
                        s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
                        s3Client.getObject(
                                new GetObjectRequest(app.getBucketName(), app.getBucketKey()),
                                new File(app.getDownloadedBucket()));
                    } catch (AmazonClientException e) { // getObject throws unchecked SDK exceptions, not IOException
                        e.printStackTrace();
                    }

                    do {
                        if (app.getBackUpExist()) {
                            System.out.println("Converting back up file");
                            app.setCurrentPacsId(objectSummary.getKey());
                            passIn = app.getDataBaseFile();
                            CONVERT = new DataConversion(passIn);
                            System.out.println(CONST.getFileDownloadedMessage());
                        }
                    } while (app.getObjectExist());

                    if (!app.getObjectExist()) {
                        app.setNoObjectFound(true);
                    }
                }
            }
            app.setS3Object(conn.listNextBatchOfObjects(app.getS3Object()));
        } while (app.getS3Object().isTruncated());
    }

    /*---------------------------- Extension Methods ----------------------------*/

    // Unzip the bucket contents after download
    public static void unzipBucket() throws IOException {
        unzip = new UnZipBuckets();
        unzip.unZipIt(app.getDownloadedBucket());
        System.out.println(CONST.getFileUnzippedMessage());
    }

    // List all S3 buckets
    public static void listAllBuckets() {
        for (Bucket bucket : app.getListOfBuckets()) {
            String bucketName = bucket.getName();
            System.out.println(bucketName + "\t" + StringUtils.fromDate(bucket.getCreationDate()));
        }
    }

    // Get the contents from the auto-backup bucket
    public static void listAllBucketContents() {
        do {
            for (S3ObjectSummary objectSummary : app.getS3Object().getObjectSummaries()) {
                if (objectSummary.getKey().contains(CONST.getDesiredKey())) {
                    System.out.println(objectSummary.getKey() + "\t" + objectSummary.getSize()
                            + "\t" + StringUtils.fromDate(objectSummary.getLastModified()));
                    app.setBackUpCount(app.getBackUpCount() + 1);
                }
            }
            app.setS3Object(conn.listNextBatchOfObjects(app.getS3Object()));
        } while (app.getS3Object().isTruncated());
        System.out.println("There are a total of : " + app.getBackUpCount() + " backups.");
    }

Solution 30 - Amazon Web-Services

You may simply get it with the s3cmd command:

s3cmd get --recursive --continue s3://test-bucket local-directory/

Solution 31 - Amazon Web-Services

As Neel Bhaat has explained in this blog, there are many different tools that can be used for this purpose. Some are AWS-provided, while most are third-party tools. All these tools require you to save your AWS account key and secret in the tool itself. Be very cautious when using third-party tools, as the credentials you save in them could cost you dearly.

Therefore, I always recommend using the AWS CLI for this purpose. You can simply install it from this link. Next, run the following command and save your key and secret values in the AWS CLI:

aws configure

And use the following command to sync your AWS S3 bucket to your local machine. (The local machine should have the AWS CLI installed.)

aws s3 sync <source> <destination>

Examples:

  1. From AWS S3 to local storage

     aws s3 sync s3://<bucket-name> <local-path>

  2. From local storage to AWS S3

     aws s3 sync <local-path> s3://<bucket-name>

  3. From one AWS S3 bucket to another bucket

     aws s3 sync s3://<source-bucket-name> s3://<destination-bucket-name>

Solution 32 - Amazon Web-Services

If the bucket is quite big, there is a command called s4cmd which makes parallel connections and improves the download time.

To install it on Debian-like systems:

apt install s4cmd

If you have pip:

pip install s4cmd

It will read the ~/.s3cfg file if present (if not, install s3cmd and run s3cmd --configure), or you can specify --access-key=ACCESS_KEY --secret-key=SECRET_KEY on the command line.

The CLI is similar to s3cmd. In your case, a sync is recommended, since you can cancel the download and start it again without having to re-download the files.

s4cmd [--access-key=ACCESS_KEY --secret-key=SECRET_KEY] sync s3://<your-bucket> /some/local/dir

Be careful: if you download a lot of data (>1TB), it may impact your bill; calculate the cost first.

Solution 33 - Amazon Web-Services

Here is a summary of what you have to do to copy an entire bucket:

1. Create a user that can operate with the AWS S3 bucket

Follow this official article: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html Don't forget to:

  • tick "programmatic access" in order to be able to deal with AWS via the CLI.
  • add the right IAM policy to your user to allow them to interact with the S3 bucket
2. Download, install and configure AWS CLI

See this link for how to configure it: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html

You can use the following command in order to add the keys you got when you created your user:

$ aws configure
AWS Access Key ID [None]: <your_access_key>
AWS Secret Access Key [None]: <your_secret_key>
Default region name [None]: us-west-2
Default output format [None]: json
3. Use the following command to download content

You can use a recursive cp command, but the aws s3 sync command is preferable:

aws s3 sync s3://your_bucket /local/path

For example, the command below will show all the .png files present in the bucket. Re-run the command without --dryrun to actually download the files.

aws s3 sync s3://your_bucket /local/path --exclude "*" --include "*.png" --dryrun

Solution 34 - Amazon Web-Services

You can use this AWS CLI command to download the entire S3 bucket content to a local folder:

aws s3 sync s3://your-bucket-name "Local Folder Path"

If you see an error like this:

fatal error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)

--no-verify-ssl (boolean)

By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates. reference

Use the --no-verify-ssl flag with the command:

aws s3 sync s3://your-bucket-name "Local Folder Path" --no-verify-ssl

Solution 35 - Amazon Web-Services

Use boto3 to download all objects in a bucket with a certain prefix:

import boto3

s3 = boto3.client('s3', region_name='us-east-1',
                  aws_access_key_id=AWS_KEY_ID,
                  aws_secret_access_key=AWS_SECRET)

def get_all_s3_keys(bucket, prefix):
    """Return every object key in the bucket that starts with the prefix."""
    keys = []
    kwargs = {'Bucket': bucket, 'Prefix': prefix}
    while True:
        resp = s3.list_objects_v2(**kwargs)
        for obj in resp.get('Contents', []):
            keys.append(obj['Key'])
        # Keep paginating until there are no more results
        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            break
    return keys

def download_file(file_name, bucket, key):
    """Download a single object to the given local file name."""
    s3.download_file(Filename=file_name, Bucket=bucket, Key=key)

bucket = "gid-folder"
prefix = "test_"
keys = get_all_s3_keys(bucket, prefix)

for key in keys:
    download_file(key, bucket, key)

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | rugbert | View Question on Stackoverflow
Solution 1 - Amazon Web-Services | Layke | View Answer on Stackoverflow
Solution 2 - Amazon Web-Services | Phil M. | View Answer on Stackoverflow
Solution 3 - Amazon Web-Services | wedocando | View Answer on Stackoverflow
Solution 4 - Amazon Web-Services | Darshan Lila | View Answer on Stackoverflow
Solution 5 - Amazon Web-Services | Sarat Chandra | View Answer on Stackoverflow
Solution 6 - Amazon Web-Services | James | View Answer on Stackoverflow
Solution 7 - Amazon Web-Services | Ives.me | View Answer on Stackoverflow
Solution 8 - Amazon Web-Services | dworrad | View Answer on Stackoverflow
Solution 9 - Amazon Web-Services | ashack | View Answer on Stackoverflow
Solution 10 - Amazon Web-Services | Najathi | View Answer on Stackoverflow
Solution 11 - Amazon Web-Services | Diederik | View Answer on Stackoverflow
Solution 12 - Amazon Web-Services | jeremyjjbrown | View Answer on Stackoverflow
Solution 13 - Amazon Web-Services | Harsh Manvar | View Answer on Stackoverflow
Solution 14 - Amazon Web-Services | Krishna Srinivas | View Answer on Stackoverflow
Solution 15 - Amazon Web-Services | Muzammil | View Answer on Stackoverflow
Solution 16 - Amazon Web-Services | jpw | View Answer on Stackoverflow
Solution 17 - Amazon Web-Services | bsplosion | View Answer on Stackoverflow
Solution 18 - Amazon Web-Services | singh30 | View Answer on Stackoverflow
Solution 19 - Amazon Web-Services | fundead | View Answer on Stackoverflow
Solution 20 - Amazon Web-Services | Lukasz Czerwinski | View Answer on Stackoverflow
Solution 21 - Amazon Web-Services | Primit | View Answer on Stackoverflow
Solution 22 - Amazon Web-Services | Patrick R | View Answer on Stackoverflow
Solution 23 - Amazon Web-Services | Deepak Singhal | View Answer on Stackoverflow
Solution 24 - Amazon Web-Services | Balaji JB | View Answer on Stackoverflow
Solution 25 - Amazon Web-Services | Praveen Gowda | View Answer on Stackoverflow
Solution 26 - Amazon Web-Services | Jobin Joseph | View Answer on Stackoverflow
Solution 27 - Amazon Web-Services | wrschneider | View Answer on Stackoverflow
Solution 28 - Amazon Web-Services | user1445267 | View Answer on Stackoverflow
Solution 29 - Amazon Web-Services | John Hanewich | View Answer on Stackoverflow
Solution 30 - Amazon Web-Services | Hubbitus | View Answer on Stackoverflow
Solution 31 - Amazon Web-Services | Keet Sugathadasa | View Answer on Stackoverflow
Solution 32 - Amazon Web-Services | bartomeu | View Answer on Stackoverflow
Solution 33 - Amazon Web-Services | veben | View Answer on Stackoverflow
Solution 34 - Amazon Web-Services | Dimuthu | View Answer on Stackoverflow
Solution 35 - Amazon Web-Services | Golden Lion | View Answer on Stackoverflow