I'm developing an image processing project and I keep coming across the word occlusion in many scientific papers. What does occlusion mean in the context of image processing? The dictionary only gives a general definition. Can anyone describe it using an image as context?


Solution 1:

Occlusion means that there is something you want to see, but you can't, due to some property of your sensor setup or some event. Exactly how it manifests itself, and how you deal with it, varies with the problem at hand.

Some examples:

If you are developing a system which tracks objects (people, cars, ...), then occlusion occurs when an object you are tracking is hidden (occluded) by another object, like two people walking past each other or a car driving under a bridge. The problem in this case is deciding what to do when an object disappears and later reappears.
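To make this concrete, here is a minimal Python sketch of one common way to survive short occlusions: predict each track forward with a constant-velocity model and only drop it after it has gone unmatched for several frames, so the object can be re-associated when it reappears. All names (`Track`, `step`, `max_missed`) are illustrative, not from any particular library.

```python
import numpy as np

class Track:
    """One tracked object: a box, a crude velocity estimate, and a
    counter of how many frames it has gone without a matching detection."""
    def __init__(self, box):
        self.box = np.asarray(box, dtype=float)   # [x, y, w, h]
        self.velocity = np.zeros(4)               # per-frame change of the box
        self.last_seen = self.box.copy()          # last actually observed box
        self.missed = 0                           # frames since last match

    def predict(self):
        # Constant-velocity prediction (a stand-in for a Kalman predict step):
        # while occluded, the box keeps moving along its last known course.
        self.box = self.box + self.velocity

    def update(self, box):
        box = np.asarray(box, dtype=float)
        # Average per-frame displacement since the last real observation.
        self.velocity = (box - self.last_seen) / (self.missed + 1)
        self.last_seen = box
        self.box = box
        self.missed = 0

def iou(a, b):
    """Intersection over union of two [x, y, w, h] boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def step(tracks, detections, iou_threshold=0.3, max_missed=10):
    """Advance one frame: match detections to tracks by overlap, and
    coast unmatched tracks through (possible) occlusions."""
    for track in tracks:
        track.predict()
    unmatched = list(detections)
    for track in tracks:
        scores = [iou(track.box, d) for d in unmatched]
        if scores and max(scores) > iou_threshold:
            track.update(unmatched.pop(scores.index(max(scores))))
        else:
            track.missed += 1          # hidden (occluded) or gone: coast for now
    # Drop a track only after a sustained disappearance, so a short
    # occlusion does not kill it and the id survives the reappearance.
    tracks[:] = [t for t in tracks if t.missed <= max_missed]
    tracks.extend(Track(d) for d in unmatched)   # new objects entering the scene
```

Calling `step(tracks, detections)` once per frame keeps ids stable across occlusions of up to `max_missed` frames; real trackers replace the pieces here with a Kalman filter and a proper assignment solver, but the coasting idea is the same.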

If you are using a range camera, then occlusions are areas where you have no information. Some laser range cameras work by projecting a laser beam onto the surface being examined and then having a camera identify the point of impact of that laser in the resulting image. This gives the 3D coordinates of that point. However, since the camera and the laser are not necessarily aligned, there can be points on the examined surface which the camera can see but the laser cannot hit (occlusion). The problem here is more a matter of sensor setup.
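As a rough illustration (a toy 2D cross-section, not any particular scanner's geometry), a surface point is laser-occluded when the straight ray from the laser source to that point dips below the surface somewhere along the way:

```python
import numpy as np

def laser_occluded(xs, zs, laser_pos):
    """Mark surface points a laser cannot reach, in a 2D cross-section.

    xs, zs: sampled surface profile (positions and heights).
    laser_pos: (x, z) position of the laser source.
    Returns a boolean mask: True where the straight ray from the laser
    to the surface point is blocked by the surface on the way, i.e. the
    point is occluded for the laser even if a camera elsewhere sees it.
    """
    lx, lz = laser_pos
    occluded = np.zeros(len(xs), dtype=bool)
    for i, (x, z) in enumerate(zip(xs, zs)):
        ts = np.linspace(0.0, 1.0, 200)[1:-1]   # interior samples of the ray
        ray_x = lx + ts * (x - lx)
        ray_z = lz + ts * (z - lz)
        surface_z = np.interp(ray_x, xs, zs)    # surface height under the ray
        occluded[i] = bool(np.any(ray_z < surface_z - 1e-9))
    return occluded

# Toy profile: a 2-unit drop at x = 5. With the laser mounted off to the
# left, the area just past the drop lies in the laser's "shadow", even
# though a camera mounted elsewhere could still see it.
xs = np.linspace(0.0, 10.0, 101)
zs = np.where(xs < 5.0, 2.0, 0.0)
mask = laser_occluded(xs, zs, laser_pos=(0.0, 8.0))
print(f"laser-occluded points: {mask.sum()} of {len(xs)}")
```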

The same can occur in stereo imaging when parts of the scene are seen by only one of the two cameras. Obviously, no range data can be collected for these points.
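One common way to find these occluded pixels is a left-right consistency check: compute a disparity map from each camera's point of view and flag pixels where the two disagree. Below is a rough OpenCV sketch assuming a rectified stereo pair; the file names are placeholders, and the flip trick is just one convenient way to get a right-based disparity map without extra modules.

```python
import cv2
import numpy as np

# A rectified stereo pair (the file names here are just placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)

# Disparity as seen from the left camera (SGBM returns fixed point, scaled by 16).
disp_left = matcher.compute(left, right).astype(np.float32) / 16.0

# Disparity as seen from the right camera, via the "flip trick": flipping
# both images horizontally and swapping them turns right-based matching
# into left-based matching, so the same matcher can be reused.
disp_right = matcher.compute(cv2.flip(right, 1), cv2.flip(left, 1))
disp_right = cv2.flip(disp_right, 1).astype(np.float32) / 16.0

# Left-right consistency check: a pixel visible in both views should land,
# after shifting by its own disparity, on a pixel with the same disparity
# in the other view. Where the two maps disagree, the pixel is occluded
# (or simply mismatched) and its range value cannot be trusted.
h, w = disp_left.shape
cols = np.arange(w)[None, :].repeat(h, axis=0)
cols_in_right = np.clip(cols - disp_left.astype(int), 0, w - 1)
occluded = np.abs(disp_left - disp_right[np.arange(h)[:, None], cols_in_right]) > 1.0

print(f"{occluded.mean():.1%} of pixels flagged as occluded or unreliable")
```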

There are probably more examples.

If you specify your problem, then maybe we can define what occlusion is in that case and what problems it entails.

Solution 2:

The problem of occlusion is one of the main reasons why computer vision is hard in general. Specifically, it is much more problematic in object tracking. See the figures below:

[figure: three frames (0005, 0519, 0835) from a face-tracking sequence]

Notice how the lady's face is not completely visible in frames 0519 and 0835, as opposed to the face in frame 0005.


And here's one more picture where the face of the man is partially hidden in all three frames.

[figure: partial occlusion of the man's face in all three frames]


Notice in the image below how the tracking of the couple in the red and green bounding boxes is lost in the middle frame due to occlusion (i.e. they are partially hidden by another person in front of them), but correctly recovered in the last frame when they become (almost) completely visible.

[figure: three frames of multi-person tracking, with the occluded tracks lost in the middle frame]

Picture courtesy: Stanford, USC

Solution 3:

Occlusion is anything that blocks our view. In the image shown below, we can easily see the people in the front row. The second row is only partly visible, and the third row even less so. Here we say that the second row is partly occluded by the first row, and that the third row is occluded by the first and second rows. We see such occlusions wherever many objects are gathered: classrooms (students sitting in rows), traffic junctions (vehicles waiting for a signal), forests (trees and plants), and so on.

[figure: rows of people, with the back rows partly occluded by the front rows]

Solution 4:

In addition to what has been said, I want to add the following:

  • For object tracking, an essential part of dealing with occlusions is writing an efficient cost function which can discriminate between the occluded object and the object occluding it. If the cost function is not well designed, the object instances (ids) may swap and the object will be incorrectly tracked. There are numerous ways to write a cost function: some methods use CNNs[1], while others prefer more control and aggregate features[2] (a toy aggregate cost is sketched below this list). The disadvantage of CNN models is that if you are tracking objects that are in the training set in the presence of objects which are not, and the former get occluded, the tracker can latch onto the wrong object and may never recover. Here is a video showing this. The disadvantage of aggregate features is that you have to engineer the cost function manually, which takes time and sometimes knowledge of advanced mathematics.
  • In the case of dense stereo vision reconstruction, occlusion happens when a region is seen by the left camera but not by the right (or vice versa). In the disparity map this occluded region appears black, because the corresponding pixels have no equivalent in the other image. Some techniques use so-called background filling algorithms, which fill the occluded black region with pixels coming from the background (a naive version is sketched below this list). Other reconstruction methods simply leave those pixels without values in the disparity map, because the pixels produced by background filling may be incorrect in those regions. Below you have the 3D projected points obtained using a dense stereo method, rotated slightly to the right in 3D space. In the presented scenario the occluded values in the disparity map are left unreconstructed (black), which is why we see that black "shadow" behind the person in the 3D image.

    [figure: 3D point cloud from dense stereo, with a black occlusion "shadow" behind the person]
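As promised above, here is a toy example of an aggregate cost function for track-detection matching. The features (normalized center distance plus a hue-histogram appearance term) and the weights are arbitrary illustrative choices, not taken from [1] or [2]; boxes are (x, y, w, h) in integer pixels.

```python
import cv2
import numpy as np

def hue_histogram(frame, box, bins=32):
    """Normalized hue histogram of a bounding-box crop, as an appearance feature."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def matching_cost(frame, track_box, det_box, w_pos=0.5, w_app=0.5):
    """Cost of assigning a detection to a track; lower is better.

    The position term alone becomes ambiguous when two objects overlap
    during an occlusion; the appearance term helps keep their ids apart.
    """
    (tx, ty, tw, th), (dx, dy, dw, dh) = track_box, det_box
    # Position term: center distance, normalized by the track's size.
    center_dist = np.hypot((tx + tw / 2) - (dx + dw / 2),
                           (ty + th / 2) - (dy + dh / 2))
    pos_cost = center_dist / np.hypot(tw, th)
    # Appearance term: Bhattacharyya distance between hue histograms
    # (0 means identical color distributions).
    app_cost = cv2.compareHist(hue_histogram(frame, track_box),
                               hue_histogram(frame, det_box),
                               cv2.HISTCMP_BHATTACHARYYA)
    return w_pos * pos_cost + w_app * app_cost
```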
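And here is a naive scanline sketch of the background filling idea, assuming occluded/invalid pixels are marked with 0 in the disparity map; real implementations are more careful and much faster.

```python
import numpy as np

def fill_occlusions_with_background(disparity, invalid=0):
    """Naive scanline background filling for a disparity map.

    For every occluded (invalid) pixel, take the nearest valid disparity
    to its left and to its right on the same scanline and keep the
    smaller of the two: a smaller disparity means a more distant point,
    i.e. the background rather than the occluding foreground object.
    """
    filled = disparity.astype(np.float32).copy()
    height, _ = filled.shape
    for y in range(height):
        row = filled[y]
        for x in np.where(row == invalid)[0]:
            left = row[:x][row[:x] != invalid]
            right = row[x + 1:][row[x + 1:] != invalid]
            neighbors = []
            if left.size:
                neighbors.append(left[-1])    # nearest valid value to the left
            if right.size:
                neighbors.append(right[0])    # nearest valid value to the right
            if neighbors:
                row[x] = min(neighbors)       # background = farther = smaller
    return filled
```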

Solution 5:

As the other answers have explained occlusion well, I will only add to that. Basically, there is a semantic gap between us and computers.

A computer actually sees every image as a sequence of values, typically in the range 0-255 for each color channel of an RGB image. These values are indexed by (row, col) for every point in the image. So if an object changes its position w.r.t. the camera such that some part of it becomes hidden (say, the hands of a person are no longer visible), the computer sees different numbers (or edges, or any other features), and it becomes harder for the algorithm to detect, recognize, or track the object.
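A tiny toy sketch of that effect: a normalized cross-correlation score (one simple matching measure) drops sharply when half of the "object" is replaced by other content, even though the object itself has not changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 20x20 grayscale "object" that an algorithm has learned to look for.
template = rng.integers(0, 256, size=(20, 20)).astype(np.float32)

def similarity(patch, template):
    """Normalized cross-correlation: 1.0 means a perfect match."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t)))

# Unoccluded: the patch the camera sees is exactly the learned object.
print(similarity(template, template))          # -> 1.0

# Occluded: something else covers the left half, so the numbers in that
# part of the array no longer belong to the tracked object at all.
occluded = template.copy()
occluded[:, :10] = rng.integers(0, 256, size=(20, 10)).astype(np.float32)
print(similarity(occluded, template))          # -> much lower, around 0.5
```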