Image fingerprint to compare similarity of many images

Normal hashing or CRC calculation algorithms do not work well with image data, because a change to a single pixel produces a completely different hash. The two-dimensional nature of the information must be taken into account.

If you need extremely robust fingerprinting, such that affine transformations (scaling, rotation, translation, flipping) are accounted for, you can use a Radon transformation on the image source to produce a normative mapping of the image data - store this with each image and then compare just the fingerprints. This is a complex algorithm and not for the faint of heart.

A few simple solutions are possible:

  1. Create a luminosity histogram for the image as a fingerprint
  2. Create scaled down versions of each image as a fingerprint
  3. Combine technique (1) and (2) into a hybrid approach for improved comparison quality

A luminosity histogram (especially one that is separated into RGB components) is a reasonable fingerprint for an image, and it can be implemented quite efficiently. Subtracting one histogram from another produces a new histogram which you can process to decide how similar two images are. Because histograms only evaluate the distribution and occurrence of luminosity/color information, they handle affine transformations quite well. If you quantize each color component's luminosity information down to an 8-bit value, 768 bytes of storage are sufficient for the fingerprint of an image of almost any reasonable size. Luminosity histograms produce false negatives when the color information in an image is manipulated: transformations like contrast/brightness adjustment, posterization, and color shifting all change the luminosity information. False positives are also possible with certain types of images, such as landscapes and images where a single color dominates the others.
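As a minimal sketch of this idea in Python (using Pillow and NumPy, which are assumptions on my part, as is the L1 distance metric), you could do something like:

```python
import numpy as np
from PIL import Image

def histogram_fingerprint(path: str) -> np.ndarray:
    """768-value fingerprint: one 256-bin histogram per RGB channel, normalized."""
    img = Image.open(path).convert("RGB")
    pixels = np.asarray(img).reshape(-1, 3)
    channels = [np.bincount(pixels[:, c], minlength=256) for c in range(3)]
    hist = np.concatenate(channels).astype(np.float64)
    return hist / hist.sum()  # normalize so image size drops out

def histogram_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute bin differences; 0.0 means identical distributions."""
    return float(np.abs(a - b).sum())
```

Two images would then be treated as similar when the distance falls below a threshold you tune on your own data.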

Using scaled-down images is another way to reduce the information density of the image to a level that is easier to compare. Reductions below 10% of the original image size generally lose too much of the information to be of use, so an 800x800 pixel image can be scaled down to 80x80 and still provide enough information to perform decent fingerprinting. Unlike histogram data, thumbnails require anisotropic scaling when the source images have varying aspect ratios. In other words, reducing a 300x800 image to an 80x80 thumbnail deforms the image, so that a comparison with a very similar 300x500 image can produce a false negative. Thumbnail fingerprints also often produce false negatives when affine transformations are involved: if you flip or rotate an image, its thumbnail will be quite different from the original's and may result in a false negative.
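A sketch of a thumbnail fingerprint, again with Pillow and NumPy; the 80x80 target size and mean-absolute-difference metric are illustrative assumptions:

```python
import numpy as np
from PIL import Image

THUMB_SIZE = (80, 80)  # roughly 10% of an 800x800 source

def thumbnail_fingerprint(path: str) -> np.ndarray:
    img = Image.open(path).convert("L")            # grayscale keeps it compact
    thumb = img.resize(THUMB_SIZE, Image.LANCZOS)  # anisotropic if the source
    return np.asarray(thumb, dtype=np.float64)     # aspect ratio isn't square

def thumbnail_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute pixel difference on a 0-255 scale; lower is more similar."""
    return float(np.abs(a - b).mean())
```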

Combining both techniques is a reasonable way to hedge your bets and reduce the occurrence of both false positives and false negatives.
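A sketch of the hybrid check, reusing the two fingerprint functions from the sketches above; both thresholds are hypothetical values you would tune against your own image set:

```python
HIST_THRESHOLD = 0.2    # max histogram L1 distance (hypothetical)
THUMB_THRESHOLD = 20.0  # max mean pixel difference (hypothetical)

def probably_same_image(path_a: str, path_b: str) -> bool:
    """Flag a pair only if both the histogram and thumbnail tests agree."""
    hist_d = histogram_distance(histogram_fingerprint(path_a),
                                histogram_fingerprint(path_b))
    if hist_d > HIST_THRESHOLD:
        return False  # cheap histogram test rejects first
    thumb_d = thumbnail_distance(thumbnail_fingerprint(path_a),
                                 thumbnail_fingerprint(path_b))
    return thumb_d <= THUMB_THRESHOLD
```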


There is a much less ad hoc approach than the scaled-down image variants proposed here, one that retains their general flavor but gives a much more rigorous mathematical basis for what is going on.

Take a Haar wavelet of the image. Basically, the Haar wavelet is the succession of differences from the lower-resolution images to each higher-resolution image, weighted by how deep you are in the 'tree' of mipmaps. The calculation is straightforward. Once you have the appropriately weighted Haar wavelet, throw away all but the k largest coefficients (in terms of absolute value), normalize the vector, and save it.

If you take the dot product of two of those normalized vectors, it gives you a measure of similarity, with 1 being nearly identical. I posted more information over here.
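A sketch of that idea in Python with NumPy and Pillow follows. The 128x128 working size and k = 40 are illustrative choices, and the per-level averaging below stands in for the depth weighting described above:

```python
import numpy as np
from PIL import Image

def haar_2d(a: np.ndarray) -> np.ndarray:
    """2D Haar decomposition of a square array whose side is a power of two."""
    a = a.astype(np.float64)
    n = a.shape[0]
    while n > 1:
        half = n // 2
        block = a[:n, :n].copy()
        a[:n, :half] = (block[:, 0::2] + block[:, 1::2]) / 2   # row averages
        a[:n, half:n] = (block[:, 0::2] - block[:, 1::2]) / 2  # row details
        block = a[:n, :n].copy()
        a[:half, :n] = (block[0::2, :] + block[1::2, :]) / 2   # column averages
        a[half:n, :n] = (block[0::2, :] - block[1::2, :]) / 2  # column details
        n = half
    return a

def haar_fingerprint(path: str, size: int = 128, k: int = 40) -> np.ndarray:
    """Keep only the k largest-magnitude coefficients, scaled to unit length."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    coeffs = haar_2d(np.asarray(img)).ravel()
    keep = np.argsort(np.abs(coeffs))[-k:]  # indices of the k largest
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]
    return sparse / np.linalg.norm(sparse)

def haar_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Dot product of the unit vectors; values near 1.0 mean nearly identical."""
    return float(np.dot(a, b))
```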


You should definitely take a look at pHash.

For image comparison there is this PHP project: https://github.com/kennethrapp/phasher

And my little javascript clone: https://redaktor.me/phasher/demo_js/index.html

Unfortunately this is "bitcount"-based, but it will recognize rotated images. Another approach in JavaScript was to build a luminosity histogram from the image with the help of canvas. You can visualize a polygon histogram on the canvas and compare that polygon against your database (e.g. MySQL spatial ...)


A long time ago I worked on a system that had some similar characteristics, and this is an approximation of the algorithm we followed:

  1. Divide the picture into zones. In our case we were dealing with 4:3 resolution video, so we used 12 zones. Doing this takes the resolution of the source images out of the picture.
  2. For each zone, calculate an overall color - the average of all pixels in the zone
  3. For the entire image, calculate an overall color - the average of all zones

So for each image, you're storing n + 1 integer values, where n is the number of zones you're tracking.

For comparisons, you also need to look at each color channel individually.

  1. For the overall image, compare the color channels for the overall colors to see if they are within a certain threshold - say, 10%
  2. If the images are within the threshold, next compare each zone. If all zones are also within the threshold, the images are a strong enough match that you can at least flag them for further comparison.

This lets you quickly discard images that are not matches; you can also use more zones and/or apply the algorithm recursively to get stronger match confidence.
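Here is a sketch of that scheme with Pillow and NumPy. The 3x4 = 12-zone grid and the 10% threshold come from the description above; storing the averages as float RGB triples rather than packed integers is a simplification on my part:

```python
import numpy as np
from PIL import Image

GRID = (3, 4)     # 3 rows x 4 columns = 12 zones, suited to 4:3 material
THRESHOLD = 0.10  # per-channel tolerance as a fraction of full scale

def zone_fingerprint(path: str) -> np.ndarray:
    """Average RGB per zone plus the overall average: an (n + 1) x 3 array."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    rows = np.array_split(img, GRID[0], axis=0)
    zones = [z for row in rows for z in np.array_split(row, GRID[1], axis=1)]
    averages = [z.reshape(-1, 3).mean(axis=0) for z in zones]
    averages.append(np.mean(averages, axis=0))  # overall color goes last
    return np.array(averages)

def zones_match(fp_a: np.ndarray, fp_b: np.ndarray) -> bool:
    """Check the overall color first, then every zone, channel by channel."""
    limit = THRESHOLD * 255
    if np.any(np.abs(fp_a[-1] - fp_b[-1]) > limit):
        return False  # cheap rejection on the overall color
    return bool(np.all(np.abs(fp_a[:-1] - fp_b[:-1]) <= limit))
```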


Similar to Ic's answer - you might try comparing the images at multiple resolutions. So each image gets saved as 1x1, 2x2, 4x4 .. 800x800. If the lowest resolution doesn't match (subject to a threshold), you can immediately reject it. If it does match, you can compare them at the next higher resolution, and so on.
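A sketch of that coarse-to-fine loop, again with Pillow and NumPy; the per-level threshold is a hypothetical tuning parameter:

```python
import numpy as np
from PIL import Image

def resolution_pyramid(path: str, max_size: int = 256) -> list:
    """Grayscale copies of the image at 1x1, 2x2, 4x4, ... up to max_size."""
    img = Image.open(path).convert("L")
    levels, size = [], 1
    while size <= max_size:
        thumb = img.resize((size, size), Image.LANCZOS)
        levels.append(np.asarray(thumb, dtype=np.float64))
        size *= 2
    return levels

def pyramid_match(path_a: str, path_b: str, threshold: float = 16.0) -> bool:
    """Compare level by level, rejecting at the coarsest level that differs."""
    for a, b in zip(resolution_pyramid(path_a), resolution_pyramid(path_b)):
        if np.abs(a - b).mean() > threshold:
            return False
    return True
```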

Also - if the images share any similar structure, such as medical images, you might be able to extract that structure into a description that is easier/faster to compare.