What is the algorithm to compute the Amazon S3 ETag for a file larger than 5GB?

Say you uploaded a 14MB file to a bucket without server-side encryption, and your part size is 5MB. Calculate the MD5 checksum of each of the 3 parts, i.e. of the first 5MB, the second 5MB, and the last 4MB. Then take the MD5 checksum of their concatenation. MD5 checksums are often printed as hex representations of binary data, so make sure you take the MD5 of the decoded binary concatenation, not of the ASCII- or UTF-8-encoded concatenation. When that's done, add a hyphen and the number of parts to get the ETag.

Here are the commands to do it on Mac OS X from the console:

$ dd bs=1m count=5 skip=0 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019611 secs (267345449 bytes/sec)
$ dd bs=1m count=5 skip=5 if=someFile | md5 >>checksums.txt
5+0 records in
5+0 records out
5242880 bytes transferred in 0.019182 secs (273323380 bytes/sec)
$ dd bs=1m count=5 skip=10 if=someFile | md5 >>checksums.txt
2+1 records in
2+1 records out
2599812 bytes transferred in 0.011112 secs (233964895 bytes/sec)

At this point all the checksums are in checksums.txt. To concatenate them, decode the hex, and take the MD5 checksum of the result, just use

$ xxd -r -p checksums.txt | md5

And now append "-3" to get the ETag, since there were 3 parts.
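
The same concatenate-decode-hash step can be done without xxd; a minimal Python sketch, assuming the hex checksums were collected in checksums.txt as above:

import binascii
import hashlib

# Read the hex-encoded per-part checksums, decode them to raw bytes,
# and take the MD5 of the concatenated binary digests.
with open('checksums.txt') as fh:
    hex_digests = ''.join(line.strip() for line in fh)

print('{}-3'.format(hashlib.md5(binascii.unhexlify(hex_digests)).hexdigest()))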

Notes

  • If you uploaded with aws-cli via aws s3 cp then you most likely have an 8MB chunksize. According to the docs, that is the default.
  • If the bucket has server-side encryption (SSE) turned on, the ETag won't be the MD5 checksum (see the API documentation). But if you're just trying to verify that an uploaded part matches what you sent, you can use the Content-MD5 header and S3 will compare it for you (see the sketch after these notes).
  • md5 on macOS just writes out the checksum, but md5sum on Linux/brew also outputs the filename. You'll need to strip that, e.g. with awk or cut. You don't need to worry about whitespace, because xxd will ignore it.
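
For that per-part verification, the Content-MD5 value is simply the base64 encoding of the part's raw (binary) MD5 digest, not the hex string; a minimal Python sketch (the helper name is made up):

import base64
import hashlib

# Content-MD5 header value for one part: base64 of the raw 16-byte digest.
# S3 rejects the upload if its computed digest doesn't match this value.
def content_md5(part_bytes):
    return base64.b64encode(hashlib.md5(part_bytes).digest()).decode('ascii')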

Code Links

  • A Gist I wrote with a working script for macOS.
  • The project at s3md5.

Based on answers here, I wrote a Python implementation which correctly calculates both multi-part and single-part file ETags.

import hashlib

def calculate_s3_etag(file_path, chunk_size=8 * 1024 * 1024):
    # MD5 each chunk of the file, keeping the hash objects around.
    md5s = []

    with open(file_path, 'rb') as fp:
        while True:
            data = fp.read(chunk_size)
            if not data:
                break
            md5s.append(hashlib.md5(data))

    # Empty file: the ETag is the MD5 of zero bytes.
    if len(md5s) < 1:
        return '"{}"'.format(hashlib.md5().hexdigest())

    # Single chunk: uploaded in one part, so the ETag is a plain MD5.
    if len(md5s) == 1:
        return '"{}"'.format(md5s[0].hexdigest())

    # Multipart: MD5 of the concatenated binary digests plus "-<part count>".
    digests = b''.join(m.digest() for m in md5s)
    digests_md5 = hashlib.md5(digests)
    return '"{}-{}"'.format(digests_md5.hexdigest(), len(md5s))

The default chunk_size is the 8MB used by the official aws cli tool, which does multipart upload for 2 or more chunks. The code should work under both Python 2 and 3.
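
To sanity-check an object, compare the computed value against what S3 reports; a hedged sketch, assuming boto3 and made-up bucket/key names:

import boto3

s3 = boto3.client('s3')
local = calculate_s3_etag('someFile')  # e.g. '"...-2"', quotes included
remote = s3.head_object(Bucket='my-bucket', Key='someFile')['ETag']
print('match' if local == remote else 'mismatch')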


  • bash implementation
  • python implementation

The algorithm literally is (copied from the readme in the python implementation):

  1. md5 the chunks
  2. glob the md5 strings together
  3. convert the glob to binary
  4. md5 the binary of the globbed chunk md5s
  5. append "-Number_of_chunks" to the end of the md5 string of the binary
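
Transcribed into Python, steps 2 through 5 look roughly like this (a sketch assuming the per-chunk MD5 hex strings from step 1 are already in a list; the function name is made up):

import binascii
import hashlib

def etag_from_chunk_md5s(chunk_md5_hex):
    # Steps 2-3: glob the hex strings together and convert to binary.
    raw = binascii.unhexlify(''.join(chunk_md5_hex))
    # Steps 4-5: MD5 the binary and append "-Number_of_chunks".
    return '{}-{}'.format(hashlib.md5(raw).hexdigest(), len(chunk_md5_hex))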

Not sure if this helps:

We're currently using an ugly (but so far useful) hack to fix those wrong ETags in multipart-uploaded files: apply a change to the file in the bucket; that triggers an MD5 recalculation by Amazon, which changes the ETag to match the actual MD5 signature.

In our case:

File: bucket/Foo.mpg.gpg

  1. ETag obtained: "3f92dffef0a11d175e60fb8b958b4e6e-2"
  2. Do something with the file (rename it, add metadata like a fake header, among others)
  3. ETag obtained: "c1d903ca1bb6dc68778ef21e74cc15b0"

We don't know the algorithm, but since we can "fix" the ETag we don't need to worry about it either.
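
One way to script that "do something" step is an in-place copy with replaced metadata; a hedged boto3 sketch, assuming the object is at most 5GB (the single-request copy limit) and using made-up names:

import boto3

s3 = boto3.client('s3')
# Copying the object onto itself rewrites it as a single-part object, so
# the new ETag is a plain MD5 again. S3 only accepts a self-copy if the
# metadata (or similar) changes, hence MetadataDirective=REPLACE.
s3.copy_object(
    Bucket='bucket',
    Key='Foo.mpg.gpg',
    CopySource={'Bucket': 'bucket', 'Key': 'Foo.mpg.gpg'},
    MetadataDirective='REPLACE',
    Metadata={'touched': 'true'},  # hypothetical marker metadata
)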


Same algorithm, Java version (BaseEncoding, Hasher, and Hashing come from the Guava library):

/**
 * Generates the checksum for an object that came from a multipart upload.
 * <p>
 * AWS S3 spec: Entity tag that identifies the newly created object's data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more nonhexadecimal characters and/or will consist of less than 32 or more than 32 hexadecimal digits.
 * <p>
 * Algorithm follows the AWS S3 implementation: https://github.com/Teachnova/s3md5
 */
private static String calculateChecksumForMultipartUpload(List<String> md5s) {
    // Concatenate the hex-encoded per-part MD5 checksums.
    StringBuilder stringBuilder = new StringBuilder();
    for (String md5 : md5s) {
        stringBuilder.append(md5);
    }

    // Decode the hex string to raw bytes and MD5 the binary concatenation.
    String hex = stringBuilder.toString();
    byte[] raw = BaseEncoding.base16().decode(hex.toUpperCase());
    Hasher hasher = Hashing.md5().newHasher();
    hasher.putBytes(raw);
    String digest = hasher.hash().toString();

    // Append "-<part count>" to match the S3 multipart ETag format.
    return digest + "-" + md5s.size();
}