How to get more than 1000 objects from S3 by using list_objects_v2?

Python, Amazon S3, Boto3

Python Problem Overview


I have more than 500,000 objects on S3 and I am trying to get the size of each object. I am using the following Python code for that:

import boto3

bucket = 'bucket'
prefix = 'prefix'

contents = boto3.client('s3').list_objects_v2(Bucket=bucket, MaxKeys=1000, Prefix=prefix)["Contents"]

for c in contents:
    print(c["Size"])

But it only gives me the sizes of the first 1000 objects. Based on the documentation, a single call can't return more than 1000 keys. Is there any way I can get all of them?

Python Solutions


Solution 1 - Python

The built-in boto3 Paginator class is the easiest way to overcome the 1000-record limit of list_objects_v2. It can be used as follows:

import boto3

s3 = boto3.client('s3')

paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='bucket', Prefix='prefix')

for page in pages:
    # A page may have no 'Contents' key if the prefix matches nothing
    for obj in page.get('Contents', []):
        print(obj['Size'])

For more details: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Paginator.ListObjectsV2
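
As a side note, the paginator's result object also supports JMESPath filtering via its search() method. A minimal sketch (the bucket and prefix names are placeholders) that totals the size of every object under a prefix:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='bucket', Prefix='prefix')

# search() flattens 'Contents[].Size' across all pages; a page with no
# matching keys yields None, so filter those out before summing
total = sum(size for size in pages.search('Contents[].Size') if size is not None)
print(total)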

Solution 2 - Python

Pass the NextContinuationToken returned in each response as the ContinuationToken parameter for subsequent calls, until the IsTruncated value in the response is false.

This can be factored into a neat generator function:

def get_all_s3_objects(s3, **base_kwargs):
    continuation_token = None
    while True:
        list_kwargs = dict(MaxKeys=1000, **base_kwargs)
        if continuation_token:
            list_kwargs['ContinuationToken'] = continuation_token
        response = s3.list_objects_v2(**list_kwargs)
        yield from response.get('Contents', [])
        if not response.get('IsTruncated'):  # At the end of the list?
            break
        continuation_token = response.get('NextContinuationToken')

for file in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix):
    print(file['Size'])
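
Since the generator yields plain response dictionaries, aggregating across the whole listing is straightforward. For example, a small usage sketch that reuses the bucket and prefix variables from the question to count the objects and total their sizes:

total_size = 0
object_count = 0
for obj in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix):
    total_size += obj['Size']
    object_count += 1
print(f"{object_count} objects, {total_size} bytes in total")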

Solution 3 - Python

If you don't need to use boto3.client, you can use boto3.resource to get a complete list of your files:

import boto3

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')

# objects.all() handles pagination internally, so this returns every object
files_in_bucket = list(bucket.objects.all())

Then, to get the sizes, just:

sizes = [f.size for f in files_in_bucket]

Depending on the size of your bucket, this might take a minute.
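
If you only need the objects under a particular prefix rather than the whole bucket, the objects collection also has a filter() method that paginates the same way. A minimal sketch, assuming the same placeholder bucket and prefix names:

import boto3

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')

# filter() narrows the listing server-side by prefix, paginating transparently
sizes = [obj.size for obj in bucket.objects.filter(Prefix='prefix')]
print(sum(sizes))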

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type        | Original Author | Original Content on Stackoverflow
Question            | tahir siddiqui  | View Question on Stackoverflow
Solution 1 - Python | J Tasker        | View Answer on Stackoverflow
Solution 2 - Python | AKX             | View Answer on Stackoverflow
Solution 3 - Python | seeiespi        | View Answer on Stackoverflow