Find all duplicate files by MD5 hash

I'm trying to find all duplicate files (based on MD5 hash), ordered by file size. So far I have this:

find . -type f -print0 | xargs -0 -I "{}" sh -c 'md5sum "{}" | cut -f1 -d " " | tr "\n" " "; du -h "{}"' | sort -h -k2 -r | uniq -w32 --all-repeated=separate

The output of this is:

1832348bb0c3b0b8a637a3eaf13d9f22 4.0K   ./picture.sh
1832348bb0c3b0b8a637a3eaf13d9f22 4.0K   ./picture2.sh
1832348bb0c3b0b8a637a3eaf13d9f22 4.0K   ./picture2.s

d41d8cd98f00b204e9800998ecf8427e 0      ./test(1).log

Is this the most efficient way?


From "man xargs": -I implies -L 1 So this is not most efficient. It would be more efficient, if you just give as many filenames to md5sum as possible, which would be:

find . -type f -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
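You can see the difference by counting invocations under each form (a toy check, with echo standing in for md5sum):

find . -type f -print0 | xargs -0 -I "{}" sh -c 'echo "{}"' | wc -l   # one invocation per file
find . -type f -print0 | xargs -0 sh -c 'echo batch' sh | wc -l       # one invocation per batch, usually 1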

Then you won't have the file size, of course. If you really need the file size, create a shell script that does the md5sum and the du -h, and merge the lines with join, as sketched below.
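A minimal sketch of such a script, assuming GNU coreutils and paths without whitespace (sort, join and uniq all split on blanks here); the temp file locations are arbitrary:

#!/bin/sh
# Hash and size every file, still batching md5sum for speed.
find . -type f -print0 | xargs -0 md5sum | sort -k2 > /tmp/hashes   # "hash  ./path"
find . -type f -print0 | xargs -0 du -h  | sort -k2 > /tmp/sizes    # "size<TAB>./path"

# join needs both inputs sorted on the join field (the path, field 2 in both);
# it prints "path hash size", which awk reorders to "hash size path" so the
# hash is the first 32 characters again and uniq can group on it as before.
join -1 2 -2 2 /tmp/hashes /tmp/sizes | awk '{print $2, $3, $1}' | sort -h -k2 -r | uniq -w32 --all-repeated=separate

The final sort -h -k2 -r mirrors the original pipeline, so the groups come out ordered by file size, largest first.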