How to get the md5sum of a file on Amazon's S3

Amazon S3

Amazon S3 Problem Overview


If I have existing files on Amazon's S3, what's the easiest way to get their md5sum without having to download the files?

Amazon S3 Solutions


Solution 1 - Amazon S3

AWS's documentation of ETag says:

> The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag may or may not be an MD5 digest of the object data. Whether or not it is depends on how the object was created and how it is encrypted as described below:
>
> - Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.
> - Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
> - If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.

Reference: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html
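A practical corollary of the rules above: if the ETag is exactly 32 hex digits with no "-" suffix, it can be compared directly against a local MD5. A minimal Python sketch (the helper name is mine, not an AWS API):

```python
import re

def etag_is_plain_md5(etag):
    # Per the docs above: single-part, SSE-S3/plaintext objects get an
    # ETag that is exactly a 32-hex-digit MD5; multipart ETags carry a
    # "-<partcount>" suffix instead.
    return re.fullmatch(r'"?[0-9a-f]{32}"?', etag) is not None

print(etag_is_plain_md5('"9bb58f26192e4ba00f01e2e7b136bbd8"'))    # True
print(etag_is_plain_md5('"9bb58f26192e4ba00f01e2e7b136bbd8-5"'))  # False
```

Note this is only a heuristic: it cannot distinguish an SSE-KMS ETag (same shape, but not an MD5) from a genuine MD5 digest.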

Solution 2 - Amazon S3

The ETag does not seem to be an MD5 for multipart uploads (as per Gael Fraiteur's comment). In those cases it carries a suffix: a hyphen followed by a number. However, even the part before the hyphen does not seem to be the MD5, although it has the same length as one. Possibly the suffix is the number of parts uploaded?

Solution 3 - Amazon S3

This is a very old question, but I had a hard time finding the information below, and this was one of the first places I could find it, so I wanted to document the details in case anyone needs them.

The ETag is an MD5, but for multipart-uploaded files it is computed from the MD5s of the uploaded parts. So you don't need to compute the MD5 on the server side: just get the ETag and that's all.

As @EmersonFarrugia said in this answer:

> Say you uploaded a 14MB file and your part size is 5MB. Calculate 3 MD5 checksums corresponding to each part, i.e. the checksum of the first 5MB, the second 5MB, and the last 4MB. Then take the checksum of their concatenation. Since MD5 checksums are hex representations of binary data, just make sure you take the MD5 of the decoded binary concatenation, not of the ASCII or UTF-8 encoded concatenation. When that's done, add a hyphen and the number of parts to get the ETag.

So the only other things you need are the ETag and the upload part size. The ETag has a -NumberOfParts suffix, so you can divide the object size by the number of parts to recover the part size. 5 MB is the minimum part size and the default value. The part size has to be an integer number of megabytes, so you can't get something like 7.25 MB per part. So it should be easy to get the part size.

Here is a script that does this on OS X, with a Linux version in the comments: https://gist.github.com/emersonf/7413337

I'll leave both scripts here in case the page above becomes inaccessible in the future:

Linux version:

#!/bin/bash
set -euo pipefail
if [ $# -ne 2 ]; then
    echo "Usage: $0 file partSizeInMb" >&2
    exit 1
fi
file=$1
if [ ! -f "$file" ]; then
    echo "Error: $file not found." 
    exit 1;
fi
partSizeInMb=$2
fileSizeInMb=$(du -m "$file" | cut -f 1)
parts=$((fileSizeInMb / partSizeInMb))
if [[ $((fileSizeInMb % partSizeInMb)) -gt 0 ]]; then
    parts=$((parts + 1));
fi
checksumFile=$(mktemp -t s3md5.XXXXXXXXXXXXX)
for (( part=0; part<$parts; part++ ))
do
    skip=$((partSizeInMb * part))
    dd bs=1M count="$partSizeInMb" skip="$skip" if="$file" 2> /dev/null | md5sum >> "$checksumFile"
done
etag=$(echo $(xxd -r -p "$checksumFile" | md5sum)-$parts | sed 's/ --/-/')
echo -e "${1}\t${etag}"
rm "$checksumFile"

OSX version:

#!/bin/bash

if [ $# -ne 2 ]; then
    echo "Usage: $0 file partSizeInMb" >&2
    exit 1
fi

file=$1

if [ ! -f "$file" ]; then
    echo "Error: $file not found." 
    exit 1;
fi

partSizeInMb=$2
fileSizeInMb=$(du -m "$file" | cut -f 1)
parts=$((fileSizeInMb / partSizeInMb))
if [[ $((fileSizeInMb % partSizeInMb)) -gt 0 ]]; then
    parts=$((parts + 1));
fi

checksumFile=$(mktemp -t s3md5)

for (( part=0; part<$parts; part++ ))
do
    skip=$((partSizeInMb * part))
    dd bs=1m count="$partSizeInMb" skip="$skip" if="$file" 2>/dev/null | md5 >> "$checksumFile"
done

echo "$(xxd -r -p "$checksumFile" | md5)-$parts"
rm "$checksumFile"

Solution 4 - Amazon S3

Below is what worked for me to compare a local file's checksum with the S3 ETag, using Python:

import hashlib
import boto3

def md5_checksum(filename):
    m = hashlib.md5()
    with open(filename, 'rb') as f:
        for data in iter(lambda: f.read(1024 * 1024), b''):
            m.update(data)
   
    return m.hexdigest()


def etag_checksum(filename, chunk_size=8 * 1024 * 1024):
    md5s = []
    with open(filename, 'rb') as f:
        for data in iter(lambda: f.read(chunk_size), b''):
            md5s.append(hashlib.md5(data).digest())
    m = hashlib.md5(b"".join(md5s))
    print('{}-{}'.format(m.hexdigest(), len(md5s)))
    return '{}-{}'.format(m.hexdigest(), len(md5s))

def etag_compare(filename, etag):
    et = etag[1:-1] # strip quotes
    print('et',et)
    if '-' in et and et == etag_checksum(filename):
        return True
    if '-' not in et and et == md5_checksum(filename):
        return True
    return False


def main():   
    session = boto3.Session(
        aws_access_key_id=s3_accesskey,
        aws_secret_access_key=s3_secret
    )
    s3 = session.client('s3')
    obj_dict = s3.get_object(Bucket=bucket_name, Key=your_key)

    etag = (obj_dict['ETag'])
    print('etag', etag)
    
    validation = etag_compare(filename,etag)
    print(validation)
    etag_checksum(filename, chunk_size=8 * 1024 * 1024)
    return validation

Solution 5 - Amazon S3

For anyone who has spent time searching to find out why the MD5 is not the same as the ETag in S3:

The ETag is calculated over chunks of data: the MD5 hash of each chunk is computed, the hashes are concatenated and hashed again into a new MD5, and the number of chunks is appended at the end.

Here is a C# version that generates the hash:

    string etag = HashOf("file.txt",8);

source code

    using System;
    using System.IO;
    using System.Security.Cryptography;

    private string HashOf(string filename, int chunkSizeInMb)
    {
        string returnMD5 = string.Empty;
        int chunkSize = chunkSizeInMb * 1024 * 1024;

        using (var crypto = new MD5CryptoServiceProvider())
        {
            int hashLength = crypto.HashSize/8;
            
            using (var stream = File.OpenRead(filename))
            {
                if (stream.Length > chunkSize)
                {
                    int chunkCount = (int)Math.Ceiling((double)stream.Length/(double)chunkSize);

                    byte[] hash = new byte[chunkCount*hashLength];
                    Stream hashStream = new MemoryStream(hash);
                    
                    long nByteLeftToRead = stream.Length;
                    while (nByteLeftToRead > 0)
                    {
                        int nByteCurrentRead = (int)Math.Min(nByteLeftToRead, chunkSize);
                        byte[] buffer = new byte[nByteCurrentRead];
                        nByteLeftToRead -= stream.Read(buffer, 0, nByteCurrentRead);
                        
                        byte[] tmpHash = crypto.ComputeHash(buffer);

                        hashStream.Write(tmpHash, 0, hashLength);

                    }

                    returnMD5 = BitConverter.ToString(crypto.ComputeHash(hash)).Replace("-", string.Empty).ToLower()+"-"+ chunkCount;
                }
                else {
                    returnMD5 = BitConverter.ToString(crypto.ComputeHash(stream)).Replace("-", string.Empty).ToLower();
                    
                }
                stream.Close();
            }
        }
        return returnMD5;
    }

Solution 6 - Amazon S3

I found that s3cmd has a --list-md5 option that can be used with the ls command, e.g.

s3cmd ls --list-md5 s3://bucket_of_mine/

Hope this helps.

Solution 7 - Amazon S3

The easiest way would be to set the checksum yourself as metadata before you upload these files to your bucket :

ObjectMetadata md = new ObjectMetadata();
md.setContentMD5("foobar"); // the Base64-encoded MD5 of the file contents
PutObjectRequest req = new PutObjectRequest(BUCKET, KEY, new File("/path/to/file")).withMetadata(md);
tm.upload(req).waitForUploadResult();

Now you can access this metadata without downloading the file:

ObjectMetadata md2 = s3Client.getObjectMetadata(BUCKET, KEY);
System.out.println(md2.getContentMD5());

Source: https://github.com/aws/aws-sdk-java/issues/1711

Solution 8 - Amazon S3

As of 2022-02-25, S3 features a new Checksum Retrieval function GetObjectAttributes:

New – Additional Checksum Algorithms for Amazon S3 | AWS News Blog

> Checksum Retrieval – The new GetObjectAttributes function returns the checksum for the object and (if applicable) for each part.

This function supports SHA-1, SHA-256, CRC-32, and CRC-32C for checking the integrity of the transmission.

It appears that MD5 is not an option for the new checksums, so this may not resolve your original question, but MD5 is deprecated for many reasons, and if an alternate checksum works for you, this may be what you're looking for.

Solution 9 - Amazon S3

I have cross-checked jets3t and the management console against uploaded files' MD5sums, and the ETag seems to be equal to the MD5sum. You can just view the properties of the file in the AWS Management Console:

https://console.aws.amazon.com/s3/home

Solution 10 - Amazon S3

I have used the following approach with success. I present here a Python fragment with notes.

Let's suppose we want the MD5 checksum for an object stored in S3, and that the object was loaded using the multipart upload process. In that case, the ETag value stored with the object is not the MD5 checksum we want. The following Python commands stream the object's bytes to compute the MD5 checksum we want, without saving the object to a local file. Please note this approach assumes a connection to the S3 account containing the object has been established, and that the boto3 and hashlib modules have been imported:

#
# specify the S3 object...
#
bucket_name = "raw-data"
object_key = "/date/study-name/sample-name/file-name"
s3_object = s3.Object(bucket_name, object_key)

#
# compute the MD5 checksum for the specified object...
#
s3_object_md5 = hashlib.md5(s3_object.get()['Body'].read()).hexdigest()

This approach works for all objects stored in S3 (i.e., objects that have been loaded with or without using the multipart upload process).
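One caveat: `.read()` pulls the entire object into memory. For large objects the body can be hashed in chunks instead; the streaming loop below works on any file-like object, so the same function accepts `s3_object.get()['Body']` (shown here with a plain in-memory stream, since the S3 connection details are outside this snippet):

```python
import hashlib
import io

def md5_of_stream(body, chunk_size=1024 * 1024):
    # Hash the body chunk by chunk so the whole object never has to
    # fit in memory; works identically on an S3 streaming body.
    m = hashlib.md5()
    for chunk in iter(lambda: body.read(chunk_size), b''):
        m.update(chunk)
    return m.hexdigest()

# e.g. md5_of_stream(s3_object.get()['Body']) against a live object
print(md5_of_stream(io.BytesIO(b'example object contents')))
```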

Solution 11 - Amazon S3

This works for me. In PHP, you can compare the checksum between the local file and the Amazon file using this:



// get localfile md5
$checksum_local_file = md5_file ( '/home/file' );

// compare checksum between localfile and s3file	
public function compareChecksumFile($file_s3, $checksum_local_file) {

	$Connection = new AmazonS3 ();
	$bucket = amazon_bucket;
	$header = $Connection->get_object_headers( $bucket, $file_s3 );

	// get header
	if (empty ( $header ) || ! is_object ( $header )) {
		throw new RuntimeException('checksum error');
	}
	$head = $header->header;
	if (empty ( $head ) || !is_array($head)) {
		throw new RuntimeException('checksum error');
	}
	// get etag (md5 amazon)
	$etag = $head['etag'];
	if (empty ( $etag )) {
		throw new RuntimeException('checksum error');
	}
	// remove quotes
	$checksumS3 = str_replace('"', '', $etag);

	// compare md5
	if ($checksum_local_file === $checksumS3) {
		return TRUE;
	} else {
		return FALSE;
	}
}


Solution 12 - Amazon S3

Here's the code to get the S3 ETag for an object in PowerShell, converted from the C# above.

function Get-ETag {
  [CmdletBinding()]
  param(
    [Parameter(Mandatory=$true)]
    [string]$Path,
    [Parameter(Mandatory=$true)]
    [int]$ChunkSizeInMb
  )

  $returnMD5 = [string]::Empty
  [int]$chunkSize = $ChunkSizeInMb * [Math]::Pow(2, 20)
  
  $crypto = New-Object System.Security.Cryptography.MD5CryptoServiceProvider
  [int]$hashLength = $crypto.HashSize / 8
  
  $stream = [System.IO.File]::OpenRead($Path)
  
  if($stream.Length -gt $chunkSize) {
    $chunkCount = [int][Math]::Ceiling([double]$stream.Length / [double]$chunkSize)
    [byte[]]$hash = New-Object byte[]($chunkCount * $hashLength)
    $hashStream = New-Object System.IO.MemoryStream(,$hash)
    [long]$numBytesLeftToRead = $stream.Length
    while($numBytesLeftToRead -gt 0) {
      $numBytesCurrentRead = [int][Math]::Min($numBytesLeftToRead, $chunkSize)
      $buffer = New-Object byte[] $numBytesCurrentRead
      $numBytesLeftToRead -= $stream.Read($buffer, 0, $numBytesCurrentRead)
      $tmpHash = $crypto.ComputeHash($buffer)
      $hashStream.Write($tmpHash, 0, $hashLength)
    }
    $returnMD5 = [System.BitConverter]::ToString($crypto.ComputeHash($hash)).Replace("-", "").ToLower() + "-" + $chunkCount
  }
  else {
    $returnMD5 = [System.BitConverter]::ToString($crypto.ComputeHash($stream)).Replace("-", "").ToLower()
  }
    
  $stream.Close()  
  $returnMD5
}

Solution 13 - Amazon S3

Here is the code (as of 2017) to get the MD5 hash:

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.apache.commons.codec.binary.Base64;

public class GenerateMD5 {
    public static void main(String args[]) throws Exception {
        String s = "<CORSConfiguration> <CORSRule> <AllowedOrigin>http://www.example.com</AllowedOrigin> <AllowedMethod>PUT</AllowedMethod> <AllowedMethod>POST</AllowedMethod> <AllowedMethod>DELETE</AllowedMethod> <AllowedHeader>*</AllowedHeader> <MaxAgeSeconds>3000</MaxAgeSeconds> </CORSRule> <CORSRule> <AllowedOrigin>*</AllowedOrigin> <AllowedMethod>GET</AllowedMethod> <AllowedHeader>*</AllowedHeader> <MaxAgeSeconds>3000</MaxAgeSeconds> </CORSRule> </CORSConfiguration>";

        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(s.getBytes());
        byte[] digest = md.digest();
        StringBuffer sb = new StringBuffer();
        /*for (byte b : digest) {
            sb.append(String.format("%02x", b & 0xff));
        }*/
        System.out.println(sb.toString());
        byte[] bytes = Base64.encodeBase64(digest);
        String finalString = new String(bytes);
        System.out.println(finalString);
    }
}

The commented-out code is where most people get it wrong: converting the digest to hex. S3 expects the Base64 encoding of the binary digest, not the hex representation.
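The distinction is easy to see in a couple of lines of Python: the Content-MD5 value S3 expects is the Base64 encoding of the 16 raw digest bytes, while the hex form is the 32-character string people usually reach for:

```python
import base64
import hashlib

digest = hashlib.md5(b'example payload').digest()  # 16 raw bytes

hex_form = digest.hex()                       # 32 hex chars -- wrong for Content-MD5
b64_form = base64.b64encode(digest).decode()  # the Base64 form S3 expects

print(hex_form)
print(b64_form)
```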

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Switch | View Question on Stackoverflow
Solution 1 - Amazon S3 | Dennis | View Answer on Stackoverflow
Solution 2 - Amazon S3 | Duncan Harris | View Answer on Stackoverflow
Solution 3 - Amazon S3 | Nelson Teixeira | View Answer on Stackoverflow
Solution 4 - Amazon S3 | li xin | View Answer on Stackoverflow
Solution 5 - Amazon S3 | Pitipong Guntawong | View Answer on Stackoverflow
Solution 6 - Amazon S3 | ahnkle | View Answer on Stackoverflow
Solution 7 - Amazon S3 | Tristan | View Answer on Stackoverflow
Solution 8 - Amazon S3 | nealmcb | View Answer on Stackoverflow
Solution 9 - Amazon S3 | b10y | View Answer on Stackoverflow
Solution 10 - Amazon S3 | Lawrence Rich | View Answer on Stackoverflow
Solution 11 - Amazon S3 | Rômulo Z. C. Cunha | View Answer on Stackoverflow
Solution 12 - Amazon S3 | Andrew Marwood | View Answer on Stackoverflow
Solution 13 - Amazon S3 | Bharadwaj_Turlapati | View Answer on Stackoverflow