Classification of detectors, extractors and matchers
I am new to OpenCV and am trying to implement image matching between two images. For this purpose, I'm trying to understand the difference between feature detectors, descriptor extractors and descriptor matchers. I came across a lot of terms and tried to read about them on the OpenCV documentation website, but I just can't seem to wrap my head around the concepts. I understood the basic difference from this question: Difference between Feature Detection and Descriptor Extraction
But I came across the following terms while reading up on the topic:
FAST, GFTT, SIFT, SURF, MSER, STAR, ORB, BRISK, FREAK, BRIEF
I understand how FAST, SIFT and SURF work, but I can't seem to figure out which of the above are only detectors and which are extractors.
Then there are the matchers.
FlannBased, BruteForce, knnMatch and probably some others.
After some reading, I figured out that certain matchers can only be used with certain extractors, as explained here: How Does OpenCV ORB Feature Detector Work? The classification given there is quite clear, but it only covers a few extractors, and I don't understand the difference between float and uchar.
So basically, can someone please
- classify the types of detectors, extractors and matchers based on float and uchar, as mentioned, or some other type of classification?
- explain the difference between the float and uchar classification or whichever classification is being used?
- mention how to initialize (code) various types of detectors, extractors and matchers?
I know it's asking for a lot, but I'll be highly grateful. Thank you.
I understand how FAST, SIFT and SURF work, but I can't seem to figure out which of the above are only detectors and which are extractors.
Basically, from that list of feature detectors / descriptor extractors (FAST, GFTT, SIFT, SURF, MSER, STAR, ORB, BRISK, FREAK, BRIEF), some are only feature detectors (FAST, GFTT, MSER, STAR), others are both feature detectors and descriptor extractors (SIFT, SURF, ORB, BRISK), and FREAK is only a descriptor extractor.
If I remember correctly, BRIEF is only a descriptor extractor, so it needs features detected by some other algorithm like FAST or ORB.
To be sure which is which, you can either read the paper describing the algorithm or browse the OpenCV documentation to see whether it is implemented as a FeatureDetector, as a DescriptorExtractor, or as both.
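For illustration, here is a minimal sketch of how that split shows up in code, assuming the OpenCV 2.4.x C++ API (the strings are the factory names documented for FeatureDetector::create and DescriptorExtractor::create in that version):

    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/nonfree.hpp>   // SIFT and SURF live in the nonfree module

    int main()
    {
        // SIFT/SURF must be registered before they can be created by name
        cv::initModule_nonfree();

        // FAST is a detector only: it finds keypoints but computes no descriptor
        cv::Ptr<cv::FeatureDetector> fast = cv::FeatureDetector::create("FAST");

        // SIFT is both: it can be created as a detector and as an extractor
        cv::Ptr<cv::FeatureDetector>     siftDetector  = cv::FeatureDetector::create("SIFT");
        cv::Ptr<cv::DescriptorExtractor> siftExtractor = cv::DescriptorExtractor::create("SIFT");

        // BRIEF is an extractor only: it describes keypoints found by something else (e.g. FAST)
        cv::Ptr<cv::DescriptorExtractor> brief = cv::DescriptorExtractor::create("BRIEF");

        return 0;
    }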
Q1: classify the types of detectors, extractors and matchers based on float and uchar, as mentioned, or some other type of classification?
Q2: explain the difference between the float and uchar classification or whichever classification is being used?
Regarding questions 1 and 2, the link you already posted is the best reference I know for classifying them as float or uchar; maybe someone else will be able to complete it. The short version: SIFT and SURF produce descriptors stored as vectors of floats and are compared with Euclidean (L2) distance, while ORB, BRISK, BRIEF and FREAK produce binary descriptors stored as uchar arrays and are compared with Hamming distance.
Q3: mention how to initialize (code) various types of detectors, extractors and matchers?
Answering question 3: OpenCV made the code to use the various types pretty much the same; mainly you have to choose one feature detector. Most of the difference is in choosing the type of matcher, and you already mentioned the three that OpenCV offers. Your best bet here is to read the documentation, the code samples, and related Stack Overflow questions; a sketch of the usual pipeline follows below. Also, some blog posts are an excellent source of information, like this series of feature detector benchmarks by Ievgen Khvedchenia (the blog is no longer available, so I had to create a raw text copy from its Google cache).
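Here is a hedged sketch of the usual detect / compute / match pipeline, again assuming the OpenCV 2.4.x C++ API and the SURF / FlannBased combination (the image file names are placeholders):

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/nonfree.hpp>   // SURF

    #include <vector>

    int main()
    {
        cv::initModule_nonfree();

        // Placeholder file names, replace with your own images (0 = load as grayscale)
        cv::Mat img1 = cv::imread("query.png", 0);
        cv::Mat img2 = cv::imread("train.png", 0);

        // 1) Detect keypoints
        cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("SURF");
        std::vector<cv::KeyPoint> keypoints1, keypoints2;
        detector->detect(img1, keypoints1);
        detector->detect(img2, keypoints2);

        // 2) Compute descriptors (SURF descriptors are float, so FlannBased can be used directly)
        cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SURF");
        cv::Mat descriptors1, descriptors2;
        extractor->compute(img1, keypoints1, descriptors1);
        extractor->compute(img2, keypoints2, descriptors2);

        // 3) Match descriptors of the first image against those of the second
        cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
        std::vector<cv::DMatch> matches;
        matcher->match(descriptors1, descriptors2, matches);

        return 0;
    }

Swapping algorithms is then mostly a matter of changing the strings passed to create(), as long as the matcher fits the descriptor type (more on that below).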
Matchers are used to find whether a descriptor is similar to another descriptor from a list. You can either compare your query descriptor with all the other descriptors in the list (BruteForce) or use a faster approximate method (FlannBased). (knnMatch is not a separate matcher; it is a method available on any matcher that returns the k best matches per query descriptor instead of only the best one.) The problem is that the approximate methods do not work for all types of descriptors. For example, the FlannBased implementation used to work only with float descriptors and not with uchar ones (but since 2.4.0, FlannBased with an LSH index can be applied to uchar descriptors).
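As a sketch of both options for binary (uchar) descriptors, assuming OpenCV 2.4.x (the helper function name is just for illustration, and the LSH parameters 12/20/2 are only commonly used example values):

    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/flann/miniflann.hpp>

    #include <vector>

    // descriptors1/descriptors2 are assumed to be uchar descriptors, e.g. from ORB, BRIEF, BRISK or FREAK
    void matchBinaryDescriptors(const cv::Mat& descriptors1, const cv::Mat& descriptors2)
    {
        std::vector<cv::DMatch> matches;

        // Option 1: exhaustive search with Hamming distance
        cv::BFMatcher bruteForce(cv::NORM_HAMMING);
        bruteForce.match(descriptors1, descriptors2, matches);

        // Option 2 (OpenCV >= 2.4.0): FLANN with an LSH index, which accepts uchar descriptors
        cv::FlannBasedMatcher flann(new cv::flann::LshIndexParams(12, 20, 2));
        flann.match(descriptors1, descriptors2, matches);

        // knnMatch is available on any matcher: it returns the k best matches per query descriptor,
        // which is handy for a ratio test
        std::vector<std::vector<cv::DMatch> > knnMatches;
        bruteForce.knnMatch(descriptors1, descriptors2, knnMatches, 2);
    }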
Quoting this App-Solut blog post about the DescriptorMatcher types:
The DescriptorMatcher comes in the varieties “FlannBased”, “BruteForceMatcher”, “BruteForce-L1” and “BruteForce-HammingLUT”. The “FlannBased” matcher uses the flann (fast library for approximate nearest neighbors) library under the hood to perform faster but approximate matching. The “BruteForce-*” versions exhaustively search the dictionary to find the closest match for an image feature to a word in the dictionary.
Some of the more popular combinations are:
Feature Detectors / Descriptor Extractors / Matcher types
(FAST, SURF) / SURF / FlannBased
(FAST, SIFT) / SIFT / FlannBased
(FAST, ORB) / ORB / Bruteforce
(FAST, ORB) / BRIEF / Bruteforce
(FAST, SURF) / FREAK / Bruteforce
You might also have noticed that there are a few adapters (Dynamic, Pyramid, Grid) for the feature detectors. The App-Solut blog post summarizes their use really nicely:
(...) and there are also a couple of adapters one can use to change the behavior of the key point detectors. For example the Dynamic adapter which adjusts a detector type specific detection threshold until enough key-points are found in an image or the Pyramid adapter which constructs a Gaussian pyramid to detect points on multiple scales. The Pyramid adapter is useful for feature descriptors which are not scale invariant.
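A quick sketch of how those adapters can be used in OpenCV 2.4: the Grid and Pyramid variants can be requested by prefixing the detector name in FeatureDetector::create, while the Dynamic adapter is built from an AdjusterAdapter such as FastAdjuster (the threshold and keypoint range below are just example values, and the function name is illustrative):

    #include <opencv2/features2d/features2d.hpp>

    void createAdaptedDetectors()
    {
        // "Pyramid"/"Grid" prefixes wrap the named detector in the corresponding adapter
        cv::Ptr<cv::FeatureDetector> pyramidFast = cv::FeatureDetector::create("PyramidFAST");
        cv::Ptr<cv::FeatureDetector> gridFast    = cv::FeatureDetector::create("GridFAST");

        // The Dynamic adapter re-runs detection, adjusting the FAST threshold (starting at 20)
        // until the number of keypoints falls inside the requested range (400..500, max 5 iterations)
        cv::Ptr<cv::FeatureDetector> dynamicFast =
            new cv::DynamicAdaptedFeatureDetector(new cv::FastAdjuster(20, true), 400, 500, 5);
    }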
Further reading:
This blog post by Yu Lu gives a very nice summary of SIFT, FAST, SURF, BRIEF, ORB, BRISK and FREAK.
This series of posts by Gil Levi also gives detailed summaries of several of these algorithms (BRIEF, ORB, BRISK and FREAK).