Image segmentation based on edge pixel map [closed]
I have trained a classifier in Python for classifying pixels in an image of cells as edge or non-edge. I've used it successfully on a few image datasets, but I am running into problems with this particular dataset, which seems pretty ambiguous even to the human eye. I don't know of any existing automated technique that can segment it accurately.
After prediction I obtain the following image:
I am relatively new to image processing and am unsure how to proceed with actually obtaining the final segmentation of the cells. I have briefly tried a few different techniques - namely the circular Hough transform, level sets, skeletonization, and contour finding - but none of them has really done the trick. Am I just not tuning the parameters correctly, or is there a better technique out there?
Here are the correct outlines, by the way, for reference.
And the original image:
And the continuous probability map:
Very nice work on boundary detection. I used to work on similar segmentation problems.
Theory:
Once you have obtained your edge map, where e(i,j) indicates the degree of "edge-iness" of pixel (i,j), you would like a segmentation of the image that respects the edge map as much as possible.
In order to formulate this "respect the edge map" in a more formal fashion I suggest you look at the Correlation clustering (CC) functional:
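Writing the functional from memory, in my own notation rather than exactly as in the paper: for a labeling $l$ that assigns each pixel to a cluster, and weights $w_{ij}$ that are positive for pairs that should be together and negative for pairs that should be apart, CC looks for the labeling that minimizes

$$E_{CC}(l) = -\sum_{(i,j)} w_{ij}\,[l_i = l_j],$$

where $[\cdot]$ is 1 when the bracketed condition holds and 0 otherwise.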
The CC functional assesses the quality of a segmentation based on pairwise relations between neighboring pixels: whether they should be in the same cluster (no edge between them) or in different clusters (there is an edge between them).
Take a look at the example at section 7.1 of the aforementioned paper.
CC is used for similar segmentation problems in medical (neuronal) imaging as well, see e.g., here.
Practice:
Once you convince yourself that CC is indeed an appropriate formulation for your problem, there is still the question of how exactly to convert your binary edge map into an affinity matrix that CC can process. Bear in mind that CC needs as input a (usually sparse) adjacency matrix with positive entries for pairs of pixels assumed to belong to the same segment, and negative entries for pairs of pixels assumed to belong to different segments.
Here's my suggestion:
- The edges in your edge map look quite thick and not well localized. I suggest non-max suppression or morphological thinning as a pre-processing stage.
- Once you have better localized edges, ignore the "edge" pixels and work only with the "non-edge" pixels; let's call them "active". Two active pixels that are next to each other (there is no "edge" pixel between them) should be together, so the adjacency matrix should have positive entries for immediate neighbors. Now consider three pixels on a line, with the two endpoints being "active" pixels: if the middle one is an edge, the two active pixels should not belong to the same cluster and the corresponding entries in the adjacency matrix should be negative; if the middle pixel is also active, the corresponding entries should be positive.
- Considering all possible neighboring pairs and triplets (inducing a 24-connected grid graph) allows you to construct an affinity matrix with positive and negative entries suitable for CC (a rough sketch of this construction follows the list).
- Given the matrix, search for the segmentation with the best CC score (the optimization stage). I have Matlab code for this here. You can also use the excellent openGM package.
- The optimization results in a partition of the active pixels only; you can map it back to the input image domain, leaving the edge pixels unassigned to any segment.
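Here is a minimal Python sketch of the first three steps (thinning plus affinity construction). It relies on scikit-image and SciPy, uses only horizontal and vertical pairs/triplets for brevity (the full 24-connected construction above also adds the diagonal directions), and the +/-1 weights are arbitrary placeholders:

```python
import numpy as np
from scipy import sparse
from skimage.morphology import thin

def build_cc_affinities(edge_map):
    """edge_map: boolean HxW array, True where the classifier predicts 'edge'."""
    edges = thin(edge_map)           # step 1: thin the thick edge responses
    active = ~edges                  # step 2: the "active" (non-edge) pixels
    H, W = edges.shape
    idx = np.arange(H * W).reshape(H, W)

    rows, cols, vals = [], [], []
    for i in range(H):
        for j in range(W):
            if not active[i, j]:
                continue
            # immediate neighbours that are both active -> positive (attractive) entry
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < H and nj < W and active[ni, nj]:
                    rows.append(idx[i, j]); cols.append(idx[ni, nj]); vals.append(+1.0)
            # triplets on a line: the sign is decided by the middle pixel
            for di, dj in ((0, 2), (2, 0)):
                ni, nj = i + di, j + dj
                if ni < H and nj < W and active[ni, nj]:
                    middle_active = active[i + di // 2, j + dj // 2]
                    rows.append(idx[i, j]); cols.append(idx[ni, nj])
                    vals.append(+1.0 if middle_active else -1.0)

    n = H * W
    A = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n))
    return A + A.T    # symmetric; rows of edge pixels stay empty (unassigned)
```

The resulting sparse matrix is what you would then feed to the CC optimization in the next step.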
Looking at the picture of the edge/non-edge pixels from the classifier, we can see that the gradient image of your input already basically gives the result of the classifier you have learnt. The confidence map, however, shows a good solution, except that:

1. The regions are connected level sets with varying sizes.
2. There are noisy bright spots inside the cells that cause false outputs from the classifier (maybe some smoothing could be considered).
3. It would probably be easier to characterize the interior of each cell: the grayscale variations, the average size. Learning these distributions would probably get you better detection results.

Topologically, we have a set of low grayscale values nested inside larger grayscale values. To exploit this, one could use graph cuts with a GMM model for the unary costs and a learnt gradient distribution for the pairwise term.
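If you want to try that last idea without coding the graph cut yourself, OpenCV's grabCut is a readily available combination of GMM colour models for the unary costs and a contrast-sensitive pairwise term (not the learnt gradient distribution I described, but close in spirit). A rough sketch, where the smoothing and the intensity thresholds used for seeding are arbitrary assumptions to tune:

```python
import cv2
import numpy as np

# img: original image as an 8-bit BGR array (H, W, 3).
# dark_thresh / bright_thresh are placeholder values exploiting the fact that
# the cells are dark regions nested inside brighter surroundings.
def graphcut_gmm(img, dark_thresh=80, bright_thresh=160):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)       # smooth the noisy bright spots

    mask = np.full(gray.shape, cv2.GC_PR_BGD, np.uint8)
    mask[gray < dark_thresh] = cv2.GC_PR_FGD       # probably cell interior
    mask[gray > bright_thresh] = cv2.GC_BGD        # confident background

    bgd_model = np.zeros((1, 65), np.float64)      # GMM parameters, filled in by OpenCV
    fgd_model = np.zeros((1, 65), np.float64)

    # 5 iterations, initialized from the seed mask rather than a rectangle
    cv2.grabCut(img, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))   # binary cell mask
```

Note that this gives a single cell/background mask, so touching cells would still need to be split afterwards (connected components, or a watershed on the distance transform, for example).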
I think your Hough transform is a good idea. One thing you should try (if you don't already) is to threshold your image before you run it through the transform, though the article I just linked only seems to cover binary thresholding. What this might do is exaggerate the differences between the edges and the background, so they might be easier to detect. Basically, apply a function (in the form of a filter which operates on the pixel's value) to each pixel.
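A minimal sketch of that combination with OpenCV, treating your probability map as the input; the threshold and every Hough parameter below are guesses that you would have to tune to your cell sizes:

```python
import cv2
import numpy as np

# prob: the classifier's probability map, float array in [0, 1].
def hough_circles_on_probability(prob, thresh=0.5):
    # hard threshold to exaggerate the edge / background difference
    edges = (prob > thresh).astype(np.uint8) * 255

    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=50, param2=15, minRadius=10, maxRadius=60)
    if circles is None:
        return np.empty((0, 3), dtype=np.uint16)
    # each row is (x_centre, y_centre, radius)
    return np.uint16(np.around(circles[0]))
```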
Another thing you can try is active contours (snakes). Basically, you lay down some initial circles, and they iteratively deform, pulled toward strong image gradients, until they settle on the boundaries you're looking for.
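scikit-image ships an active contour implementation; here is a minimal sketch that seeds one circular snake around a rough cell centre (which could come from the Hough step above). The smoothing and the energy weights are assumptions to tune:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# gray: the original image as a 2-D float array.
# centre: rough (row, col) of one cell; radius, sigma and the energy weights
# (alpha, beta, gamma) are assumptions that need tuning.
def snake_around_cell(gray, centre, radius=25):
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([centre[0] + radius * np.sin(s),
                            centre[1] + radius * np.cos(s)])   # initial circle

    smoothed = gaussian(gray, sigma=2, preserve_range=True)
    snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
    return snake   # (N, 2) array of (row, col) points hugging the cell boundary
```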
My last idea is to maybe try a wavelet transform; these seem to work pretty well at picking out boundaries and borders in images (a small example is below). Hope these ideas can get you started.
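As a concrete starting point for the wavelet idea, here is a minimal sketch with PyWavelets; the choice of the 'haar' wavelet and of combining the detail bands into a single edge-strength map is just one simple option:

```python
import numpy as np
import pywt

# gray: the original image as a 2-D float array.
def wavelet_edge_strength(gray):
    # single-level 2-D discrete wavelet transform
    cA, (cH, cV, cD) = pywt.dwt2(gray, 'haar')
    detail = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)   # combine the detail bands
    # crude upsampling back to the input size, for inspection/thresholding
    up = np.kron(detail, np.ones((2, 2)))
    return up[:gray.shape[0], :gray.shape[1]]
```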