First of all, I see you are repeatedly overwriting objPts[0] and never using objPts[1] to objPts[3]:
objPts[0].x = 0 ;
objPts[0].y = 0 ;
objPts[1].x = board_w-1 ;
objPts[0].y = 0 ;
objPts[0].x = 0 ;
objPts[0].y = board_h-1 ;
objPts[0].x = board_w-1 ;
objPts[0].y = board_h-1 ;
Now, what getPerspectiveTransform does is find the transformation that maps a set of points p0...p3 to p0'...p3', assuming the two sets are related by a homography. If you want to use the result with warpPerspective, both sets p0...p3 and p0'...p3' must be expressed in image coordinates (pixels), which is not the case here, since objPts is expressed in arbitrary space coordinates.
You are telling getPerspectiveTransform that you want the following mapping between the corners of your chess pattern in image coordinates (pixels) and the resulting image coordinates (pixels):
corner0 -> (0,0)
corner1 -> (board_w-1, 0)
corner2 -> (0, board_h-1)
corner3 -> (board_w-1, board_h-1)
So if board_w is, say, 10, you map your chessboard onto a region only about 10 pixels wide! Which explains the result you are obtaining.
To get what you want, the second column of the mapping above should contain the image (pixel) coordinates you want the chess pattern to occupy in the bird's-eye view image. For example, if you want each square to be 10x10 pixels, multiply the values by 10.
Also, keep in mind that you are not correcting lens distortion (which doesn't seem large anyway) and the pattern appears not to be perfectly flat, so you might get some "imperfections" in the resulting image in the form of slightly curved lines.
Hope this helps!