Question

I'm using OpenCV to try to get a bird's-eye projection of an image of a chessboard.
I first find all the inner corners of the chessboard and draw them.
I then use warpPerspective() on the image, but it yields an extremely tiny warped result. Can anyone figure out what is causing this?

Here is my code:

#include <ros/ros.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

using namespace cv ;

int main(int argc, char* argv[] ) {
  ros::init( argc, argv, "bird_view" ) ;
  int board_w = atoi(argv[1]);
  int board_h = atoi(argv[2]);

  cv::Size board_sz( board_w, board_h );
  cv::Mat image = cv::imread( "image.jpg" ) ;
  cv::Mat gray_image, tmp, H , birds_image;
  cv::Point2f objPts[4], imgPts[4] ;
  std::vector<cv::Point2f> corners ;
  float Z = 1 ; //have experimented from values as low as .1 and as high as 100
  int key = 0 ;
  int found = cv::findChessboardCorners( image, board_sz, corners ) ;

  if (found) {
    cv::drawChessboardCorners(image, board_sz, corners, 1) ;
    cvtColor( image, gray_image, CV_RGB2GRAY ) ;
    cornerSubPix( gray_image, corners, Size(11, 11), Size(-1, -1),
                  TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1) ) ;
    cv::resize( image, tmp, Size(), .5, .5 ) ;
    namedWindow( "IMAGE" ) ;
    cv::imshow( "IMAGE" , tmp ) ;
    waitKey(0) ;
  }

  objPts[0].x = 0 ;
  objPts[0].y = 0 ;
  objPts[1].x = board_w-1 ;
  objPts[0].y = 0 ;
  objPts[0].x = 0 ;
  objPts[0].y = board_h-1 ;
  objPts[0].x = board_w-1 ;
  objPts[0].y = board_h-1 ;

  imgPts[0] = corners.at(0) ;
  imgPts[1] = corners.at(board_w-1) ;
  imgPts[2] = corners.at((board_h-1) * board_w) ;
  imgPts[3] = corners.at((board_h-1) * board_w + board_w-1) ;

  H = cv::getPerspectiveTransform( objPts, imgPts ) ;

  birds_image = image ;

  while ( key != 27 ) {

    H.at<float>(2,2) = Z ;

    cv::warpPerspective( image, birds_image, H, Size( 2 * image.cols, 2 * tmp.rows ) ,
                         CV_INTER_LINEAR | CV_WARP_INVERSE_MAP | CV_WARP_FILL_OUTLIERS ) ;

    cv::imshow( "IMAGE", birds_image ) ;
    cv::waitKey(0) ;
  }

  return 0 ;
}

All this code is based on the bird's-eye projection example in O'Reilly's OpenCV book. I suspect the warped picture may actually be correct, but I can't be certain until I can see it at a usable size.

Solution

First of all, I see you are repeatedly overwriting objPts[0] instead of filling in objPts[1] through objPts[3]:

objPts[0].x = 0 ;
objPts[0].y = 0 ;
objPts[1].x = board_w-1 ;
objPts[0].y = 0 ;
objPts[0].x = 0 ;
objPts[0].y = board_h-1 ;
objPts[0].x = board_w-1 ;
objPts[0].y = board_h-1 ;

Now, what getPerspectiveTransform does is find the transformation that would map a set of points p0...p3 onto p0'...p3', assuming they are related by a homography. If you want to use that transformation with warpPerspective, both sets p0...p3 and p0'...p3' must be expressed in image coordinates (pixels). That is not the case here, because objPts is expressed in arbitrary board-space coordinates.

You are saying to getPerspectiveTransform that you want the following mapping between the corners of your chess pattern in image coordinates (pixels) and the resulting image coordinates (pixels):

corner0 -> (0,0)
corner1 -> (board_w-1, 0)
corner2 -> (0, board_h-1)
corner3 -> (board_w-1, board_h-1)

So, if board_w is, say, 10, you map your chessboard onto a region only about 10 pixels wide, which explains the result you are obtaining.

To get what you want, replace the right-hand side of the mapping above with the image (pixel) coordinates you want the chess pattern to occupy in the bird's-eye image. For example, if you want each square to be 10x10 pixels, multiply the values by 10.

Also, keep in mind that you are not correcting lens distortion (which doesn't seem large anyway) and the pattern appears not to be perfectly flat, so you might get some "imperfections" in the resulting image in the form of slightly curved lines.

Hope this helps!

OTHER TIPS

So thanks to Milo's clarification on what getPerspectiveTransform does, I slightly changed how I specify the points to map:

std::vector<cv::Point2f> objPts(4) ;
objPts[0].x = 250 ;
objPts[0].y = 250 ;
objPts[1].x = 250 + (board_w-1) * 25 ;
objPts[1].y = 250 ;
objPts[2].x = 250 ;
objPts[2].y = 250 + (board_h-1) * 25 ;
objPts[3].x = 250 + (board_w-1) * 25 ;
objPts[3].y = 250 + (board_h-1) * 25 ;

And it works great! The right values will vary from image to image, so play around with the base offset and with how large or small you want the transformed image to be.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow