Detect and visualize differences between two images with OpenCV Python

To visualize differences between two images, we can take a quantitative approach and determine the exact discrepancies between images using the Structural Similarity Index (SSIM), introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install it with pip install scikit-image.

The structural_similarity() function from scikit-image returns a score and a difference image, diff. The score represents the structural similarity index between the two input images and falls in the range [-1, 1], with values closer to one indicating higher similarity. But since you're only interested in where the two images differ, the diff image is what we'll focus on. Specifically, the diff image contains the actual image differences, with darker regions having more disparity: larger areas of disparity are highlighted in black, while smaller differences are in gray.

The gray noisy areas are probably due to .jpg lossy compression; we would obtain a cleaner result with a lossless image format. The SSIM score after comparing the two images shows that they are very similar.

Image similarity 0.9198863419190031

Next we threshold the diff image and find contours, since we only want the large differences between the images. We iterate through each contour, filter using a minimum threshold area to remove the gray noise, and highlight each difference with a bounding box.

To visualize the exact differences, we fill the contours onto a mask and onto the original image.

from skimage.metrics import structural_similarity
import cv2
import numpy as np

before = cv2.imread('left.jpg')
after = cv2.imread('right.jpg')

# Convert images to grayscale
before_gray = cv2.cvtColor(before, cv2.COLOR_BGR2GRAY)
after_gray = cv2.cvtColor(after, cv2.COLOR_BGR2GRAY)

# Compute SSIM between two images
(score, diff) = structural_similarity(before_gray, after_gray, full=True)
print("Image similarity", score)

# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1] 
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = (diff * 255).astype("uint8")

# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
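# Note: cv2.findContours returns 2 values in OpenCV 2.x/4.x and 3 values in
# OpenCV 3.x, so grab the contour list in a version-agnostic way below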
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

mask = np.zeros(before.shape, dtype='uint8')
filled_after = after.copy()

# Loop over the contours, keep only those above a minimum area to ignore the
# small gray noise blobs, then highlight each remaining difference region
for c in contours:
    area = cv2.contourArea(c)
    if area > 40:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(before, (x, y), (x + w, y + h), (36,255,12), 2)
        cv2.rectangle(after, (x, y), (x + w, y + h), (36,255,12), 2)
        cv2.drawContours(mask, [c], 0, (0,255,0), -1)
        cv2.drawContours(filled_after, [c], 0, (0,255,0), -1)

cv2.imshow('before', before)
cv2.imshow('after', after)
cv2.imshow('diff', diff)
cv2.imshow('mask', mask)
cv2.imshow('filled after', filled_after)
cv2.waitKey(0)

Note: the scikit-image version used is 0.18.1. In previous versions, the function was skimage.measure.compare_ssim, but it has been deprecated and removed in 0.18.1. According to the docs, the functionality still exists but now lives in the skimage.metrics submodule under a different name. The updated function is skimage.metrics.structural_similarity.
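
If the script also has to run on older scikit-image releases, one option is a small import shim like the sketch below (my own suggestion, not part of the original code):

try:
    from skimage.metrics import structural_similarity  # scikit-image >= 0.16
except ImportError:
    # Older releases expose the same functionality under skimage.measure
    from skimage.measure import compare_ssim as structural_similarity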


One great way of quickly identifying differences between two images is to flicker between them in an animated GIF.


The process is described and the code is available here. It can be pretty readily adapted to Python. As is, it uses ImageMagick, which is installed on most Linux distros and is available for macOS and Windows.

Just for reference, I used this command in Terminal:

flicker_cmp -o result.gif -r x400 a.jpg b.jpg
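
If you would rather stay in Python than call the ImageMagick-based script, here is a rough Pillow sketch of the same flicker idea (my own approximation, not the linked flicker_cmp code; the filenames are placeholders and both images are assumed to have the same dimensions):

from PIL import Image

# Load the two images to compare (placeholder filenames)
a = Image.open('a.jpg').convert('RGB')
b = Image.open('b.jpg').convert('RGB')

# Write an animated GIF that alternates between the two frames every 500 ms
a.save('result.gif', save_all=True, append_images=[b], duration=500, loop=0)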

If you are willing to use ImageMagick, then you can use its compare tool. Since your images are JPG, they will show differences due to the compression of each, so I add -fuzz 25% to allow a 25% tolerance in the difference without flagging it. The result will show red (by default) where the images are different, but the color can be changed.

Linux comes with ImageMagick. Versions are also available for macOS and Windows.

There is also Python Wand, which uses ImageMagick; a rough Wand equivalent is sketched after the compare command below.

compare -metric rmse -fuzz 25% left.jpg right.jpg diff.png
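
For reference, a rough Wand equivalent of that compare command might look like this sketch (my own approximation; it assumes Wand's Image.compare() with the 'root_mean_square' metric and the fuzz property, which may require a recent Wand version):

from wand.image import Image

with Image(filename='left.jpg') as left, Image(filename='right.jpg') as right:
    # Tolerate JPEG compression noise, similar to -fuzz on the command line
    left.fuzz = 0.25 * left.quantum_range
    diff, distortion = left.compare(right, metric='root_mean_square')
    with diff:
        diff.save(filename='diff.png')
    print('RMSE distortion:', distortion)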



An alternate method is to use a lower tolerance and morphologic processing to remove the noise and fill in a little.

This uses convert: it first copies the left image and whitens it, then copies the left image again and fills it with red, then copies the left image once more and does a difference operation with the right image, thresholding the result at 10%. That lower tolerance leaves more noise in the image but gives better representations of the true regions, so I use morphologic smoothing to remove the noise. Finally, I use the last image as a mask to composite red over the whitened left image.

convert left.jpg \
\( -clone 0 -fill white -colorize 50% \) \
\( -clone 0 -fill red -colorize 100 \) \
\( -clone 0 right.jpg -compose difference -composite -threshold 10% -morphology smooth diamond:1 \) \
-delete 0 \
-compose over -composite \
result.png

