Automatic perspective correction OpenCV

I am trying to implement automatic perspective correction in my iOS program. When I use the test image I found in the tutorial, everything works as expected, but when I take a picture myself I get back a weird result.

I am using the code found in this tutorial.

When I give it an image that looks like this:

[image: input photo of the card]

I get this as the result:

[image: distorted result with many green lines]

Here is what dst gives me, in case it helps.

[image: contents of dst]

I call the method that contains the code like this:

quadSegmentation(Img, bw, dst, quad);

Can anyone tell me why I am getting so many green lines compared to the tutorial, and how I might fix this so that the image is cropped to contain only the card?


For a perspective transform you need:

source points -> coordinates of the quadrangle vertices in the source image.

destination points -> coordinates of the corresponding quadrangle vertices in the destination image.

Here we will calculate these points by contour processing.
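Once the point pairs are known, the transform itself is just two OpenCV calls. A minimal sketch, assuming srcQuad and dstQuad already hold the four matching corners in the same order:

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    // Minimal sketch of the core API: srcQuad/dstQuad are assumed to be
    // four matching corners, listed in the same order in both vectors.
    Mat correctPerspective(const Mat& src,
                           const std::vector<Point2f>& srcQuad,
                           const std::vector<Point2f>& dstQuad)
    {
        Mat M = getPerspectiveTransform(srcQuad, dstQuad); // 3x3 homography
        Mat out;
        warpPerspective(src, out, M, src.size());          // remap the image
        return out;
    }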

Calculate the coordinates of the quadrangle vertices in the source image

  • You can get your card as a contour simply by blurring, thresholding, finding the contours, and keeping the largest one.
  • After finding the largest contour, approximate it with a polygonal curve; here you should get 4 points that represent the corners of your card. You can adjust the parameter epsilon to end up with exactly 4 coordinates (see the sketch below).
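If a fixed epsilon doesn't yield exactly 4 points, one common trick (my sketch below, not part of the tutorial) is to grow epsilon as a fraction of the contour perimeter until the approximation collapses to a quadrilateral:

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    // Sketch: increase epsilon until approxPolyDP returns exactly 4 corners.
    // The 2%..10% range of the arc length is a heuristic, not a fixed rule.
    std::vector<Point> approxQuad(const std::vector<Point>& contour)
    {
        std::vector<Point> poly;
        double perimeter = arcLength(contour, true);
        for (double frac = 0.02; frac <= 0.10; frac += 0.01) {
            approxPolyDP(contour, poly, frac * perimeter, true);
            if (poly.size() == 4)
                break;              // got a quadrilateral
        }
        return poly;                // caller should check size() == 4
    }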

[image: largest contour with approximated corner points]

Calculate the coordinates of the corresponding quadrangle vertices in the destination image

  • This can easily be found by calculating the bounding rectangle of the largest contour (see the sketch below).
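As a small sketch, the rectangle can be expanded into the four destination corners; the TL, BL, TR, BR ordering here is chosen to match the full code further down:

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    // Sketch: turn the bounding rectangle into the four destination corners,
    // ordered top-left, bottom-left, top-right, bottom-right.
    std::vector<Point2f> rectCorners(const Rect& r)
    {
        return {
            Point2f(r.x,           r.y),            // top-left
            Point2f(r.x,           r.y + r.height), // bottom-left
            Point2f(r.x + r.width, r.y),            // top-right
            Point2f(r.x + r.width, r.y + r.height)  // bottom-right
        };
    }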

[image: bounding rectangle of the largest contour]

In the image below, the red rectangle represents the source points and the green one the destination points.

[image: red source quadrilateral and green destination rectangle]

Adjust the coordinate order and apply the perspective transform

  • Here I adjusted the coordinate order manually; you could instead use a sorting heuristic (see the sketch below).
  • Then calculate the transformation matrix and apply warpPerspective.
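A common ordering heuristic, sketched here as an alternative to the manual adjustment (it is not part of the original code): the top-left corner minimises x+y, the bottom-right maximises it, while y-x separates top-right from bottom-left.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>
    using namespace cv;

    // Sketch: order four corners as top-left, bottom-left, top-right,
    // bottom-right (the order the destination points use below).
    // Heuristic: TL minimises x+y, BR maximises x+y,
    //            TR minimises y-x, BL maximises y-x.
    std::vector<Point2f> orderCorners(const std::vector<Point2f>& pts) // expects 4 points
    {
        auto bySum  = [](const Point2f& a, const Point2f& b)
                      { return a.x + a.y < b.x + b.y; };
        auto byDiff = [](const Point2f& a, const Point2f& b)
                      { return a.y - a.x < b.y - b.x; };
        Point2f tl = *std::min_element(pts.begin(), pts.end(), bySum);
        Point2f br = *std::max_element(pts.begin(), pts.end(), bySum);
        Point2f tr = *std::min_element(pts.begin(), pts.end(), byDiff);
        Point2f bl = *std::max_element(pts.begin(), pts.end(), byDiff);
        return { tl, bl, tr, br };
    }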

See the final result

[image: perspective-corrected card]

Code

    #include <opencv2/opencv.hpp>
    #include <iostream>

    using namespace cv;
    using namespace std;

    int main()
    {
        Mat src = imread("card.jpg");
        Mat thr;
        cvtColor(src, thr, COLOR_BGR2GRAY);
        threshold(thr, thr, 70, 255, THRESH_BINARY);

        vector<vector<Point> > contours; // vector for storing contours
        vector<Vec4i> hierarchy;
        int largest_contour_index = 0;
        double largest_area = 0;         // contourArea returns a double

        Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0)); // create destination image
        findContours(thr.clone(), contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE); // find the contours in the image
        for (int i = 0; i < (int)contours.size(); i++) {
            double a = contourArea(contours[i], false); // area of this contour
            if (a > largest_area) {
                largest_area = a;
                largest_contour_index = i;              // store the index of the largest contour
            }
        }

        drawContours(dst, contours, largest_contour_index, Scalar(255, 255, 255), FILLED, 8, hierarchy);
        vector<vector<Point> > contours_poly(1);
        approxPolyDP(Mat(contours[largest_contour_index]), contours_poly[0], 5, true);
        Rect boundRect = boundingRect(contours[largest_contour_index]);
        if (contours_poly[0].size() == 4) {
            vector<Point2f> quad_pts;   // source corners (card in the photo)
            vector<Point2f> square_pts; // destination corners (upright rectangle)
            quad_pts.push_back(Point2f(contours_poly[0][0].x, contours_poly[0][0].y));
            quad_pts.push_back(Point2f(contours_poly[0][1].x, contours_poly[0][1].y));
            quad_pts.push_back(Point2f(contours_poly[0][3].x, contours_poly[0][3].y));
            quad_pts.push_back(Point2f(contours_poly[0][2].x, contours_poly[0][2].y));
            square_pts.push_back(Point2f(boundRect.x, boundRect.y));
            square_pts.push_back(Point2f(boundRect.x, boundRect.y + boundRect.height));
            square_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y));
            square_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y + boundRect.height));

            Mat transmtx = getPerspectiveTransform(quad_pts, square_pts);
            Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
            warpPerspective(src, transformed, transmtx, src.size());
            Point P1 = contours_poly[0][0];
            Point P2 = contours_poly[0][1];
            Point P3 = contours_poly[0][2];
            Point P4 = contours_poly[0][3];

            // draw the detected quadrilateral (red) and the bounding rectangle (green)
            line(src, P1, P2, Scalar(0, 0, 255), 1, LINE_AA, 0);
            line(src, P2, P3, Scalar(0, 0, 255), 1, LINE_AA, 0);
            line(src, P3, P4, Scalar(0, 0, 255), 1, LINE_AA, 0);
            line(src, P4, P1, Scalar(0, 0, 255), 1, LINE_AA, 0);
            rectangle(src, boundRect, Scalar(0, 255, 0), 1, 8, 0);
            rectangle(transformed, boundRect, Scalar(0, 255, 0), 1, 8, 0);

            imshow("quadrilateral", transformed);
            imshow("thr", thr);
            imshow("dst", dst);
            imshow("src", src);
            imwrite("result1.jpg", dst);
            imwrite("result2.jpg", src);
            imwrite("result3.jpg", transformed);
            waitKey();
        }
        else
            cout << "Make sure that you are getting 4 corners using approxPolyDP..." << endl;

        return 0;
    }

This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and at the differences between their image and yours (by the way, it is a good idea to start with their image and make sure the code works on it first):

  1. Get the edge map. - will probably work, since your edges are fine
  2. Detect lines with the Hough transform. - fails, since you have lines not only on the contour but also inside the card, so expect a lot of false-alarm lines
  3. Get the corners by finding intersections between lines. - fails for the reason above
  4. Check if the approximate polygonal curve has 4 vertices. - fails
  5. Determine the top-left, bottom-left, top-right, and bottom-right corners. - fails
  6. Apply the perspective transformation. - fails completely

To fix your problem you have to ensure that only lines on the periphery are extracted. If you always have a dark background, you can use this fact to discard lines with other contrasts/polarities. Alternatively, you can extract all the lines and then keep only the ones that are close to the image boundary (assuming your background doesn't contain lines of its own).
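As a rough sketch of that second option (the 20-pixel margin is an arbitrary assumption you would tune for your images), you could keep only the Hough segments whose endpoints both lie near the image border:

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    // Sketch: keep only segments (as returned by HoughLinesP) whose
    // endpoints both lie within `margin` pixels of the image border.
    std::vector<Vec4i> keepBoundaryLines(const std::vector<Vec4i>& lines,
                                         Size imgSize, int margin = 20)
    {
        auto nearBorder = [&](int x, int y) {
            return x < margin || y < margin ||
                   x > imgSize.width  - margin ||
                   y > imgSize.height - margin;
        };
        std::vector<Vec4i> kept;
        for (const Vec4i& l : lines)
            if (nearBorder(l[0], l[1]) && nearBorder(l[2], l[3]))
                kept.push_back(l); // both endpoints near the periphery
        return kept;
    }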