I'm trying to verify my camera calibration, so I'd like to rectify the calibration images. I expect this will involve a call to warpPerspective, but I do not see an obvious function that takes the camera matrix together with the rotation and translation vectors to generate the perspective matrix for that call.

Essentially I want to do the process described here (see especially the images towards the end) but starting with a known camera model and pose.

Is there a straightforward function call that takes the camera intrinsic and extrinsic parameters and computes the perspective matrix for use in warpPerspective?

I'll be calling warpPerspective after having called undistort on the image.

In principle, I could derive the solution by solving the system of equations defined at the top of the OpenCV camera calibration documentation after specifying the constraint Z=0, but I figure that there must be a canned routine that will allow me to orthorectify my test images.

In my searches, I'm finding it hard to wade through all of the stereo calibration results -- I only have one camera, but want to rectify the image under the constraint that I'm only looking at a planar test pattern.

Solution

Actually there is no need to involve an orthographic camera. Here is how you can get the appropriate perspective transform.

If you calibrated the camera using cv::calibrateCamera, you obtained a camera matrix K, a vector of lens distortion coefficients D and, for each image that you used, a rotation vector rvec (which you can convert to a 3x3 matrix R using cv::Rodrigues) and a translation vector T. Consider one of these images and the associated R and T. After you call cv::undistort using the distortion coefficients, the image will be as if it had been acquired by a camera with projection matrix K * [ R | T ].
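As a rough sketch of that setup in Python (objpoints, imgpoints, image and image_size are illustrative names for your own calibration data, not something defined above):

import cv2
import numpy as np

# objpoints/imgpoints: 3D-2D correspondences, e.g. from cv2.findChessboardCorners;
# image_size is (width, height). These inputs are assumed, not shown here.
ret, K, D, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints,
                                              image_size, None, None)

# pose of one particular calibration image:
R, _ = cv2.Rodrigues(rvecs[0])   # cv2.Rodrigues returns (matrix, jacobian)
T = tvecs[0]

# remove lens distortion, so only the perspective part remains:
undistorted = cv2.undistort(image, K, D)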

Basically (as @DavidNilosek intuited), you want to cancel the rotation and get the image as if it had been acquired by a projection matrix of the form K * [ I | -C ], where C = -R.inv() * T is the camera position. For that, you have to apply the following transformation:

Hr = K * R.inv() * K.inv()

The only potential problem is that the warped image might go outside the visible part of the image plane. Hence, you can use an additional translation to solve that issue, as follows:

     [ 1  0  |         ]
Ht = [ 0  1  | -K*C/Cz ]
     [ 0  0  |         ]

where Cz is the component of C along the Oz axis.

Finally, with the definitions above, H = Ht * Hr is a rectifying perspective transform for the considered image.
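In Python, a direct transcription of these formulas could look as follows (K, R and T are the calibration outputs for the considered image, and undistorted stands for the output of cv::undistort, as in the sketch above):

import cv2
import numpy as np

Rinv = R.T                          # the inverse of a rotation matrix is its transpose
C = -Rinv @ T.ravel()               # camera position in world coordinates
Hr = K @ Rinv @ np.linalg.inv(K)

# Ht: identity with its third column replaced by the vector -K*C/Cz, as in the
# formula above; note its last entry evaluates to -1, which the homogeneous
# normalization needs in order to work out
Ht = np.eye(3)
Ht[:, 2] = -(K @ C) / C[2]

H = Ht @ Hr
rectified = cv2.warpPerspective(undistorted, H, undistorted.shape[1::-1])

Working through the algebra, H sends the pixel that images the plane point (X, Y, 0) to (fx*X/Cz, fy*Y/Cz): the scene plane scaled by the focal length over the camera height, with the world origin mapped to the image origin. If parts of the result still fall outside the viewport, the corner-based offset in the last snippet on this page is a more robust way to build Ht.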

Other tips

This is a sketch of what I mean by "solving the system of equations" (in Python):

import cv2
import numpy as np  # the original used scipy out of habit; numpy is the standard choice

# rvec = the rotation vector
# tvec = the translation vector
# A    = the camera intrinsic matrix

(fx, fy) = (A[0, 0], A[1, 1])
Ainv = np.array([[1.0/fx,  0.0,    -A[0, 2]/fx],
                 [0.0,     1.0/fy, -A[1, 2]/fy],
                 [0.0,     0.0,     1.0]], dtype=np.float32)
R, _ = cv2.Rodrigues(rvec)  # cv2.Rodrigues returns (matrix, jacobian)
Rinv = R.T

# displacement between camera and world coordinate origin, in world coordinates
u = Rinv @ tvec.ravel()

# corners of the image in homogeneous (x, y, 1) pixel coordinates,
# hard coded here for a 480x640 (width x height) image
pixel_corners = [np.array(c, dtype=np.float32)
                 for c in [(0.5, 0.5, 1), (0.5, 639.5, 1),
                           (479.5, 639.5, 1), (479.5, 0.5, 1)]]
scene_corners = []
for c in pixel_corners:
    # direction of the ray that images this corner, in world coordinates
    lhat = Rinv @ (Ainv @ c)
    s = u[2] / lhat[2]
    # now (s*lhat - u)[2] == 0, i.e. s is how far along the line of sight
    # we need to move to reach the Z == 0 plane
    g = s * lhat - u
    scene_corners.append((g[0], g[1]))

# we now have 4 pixel_corners (image coordinates) and the 4 corresponding
# scene_corners; cv2.getPerspectiveTransform can be called on them, and so on..
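To finish the sketch, the correspondences can be handed to cv2.getPerspectiveTransform; the scale and offset mapping scene units to output pixels below are illustrative choices, not part of the original, and undistorted again stands for the undistorted input image:

# map scene coordinates (in world units) to output pixels; scale and offset
# are arbitrary illustrative values chosen so the result lands in view
scale, offset = 50.0, 100.0
src = np.array([c[:2] for c in pixel_corners], dtype=np.float32)
dst = np.array([(scale*x + offset, scale*y + offset) for (x, y) in scene_corners],
               dtype=np.float32)

H = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(undistorted, H, (640, 640))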

For anyone struggling with the alignment of the image when following @BConic's answer, a practical solution is to warp the image corner points using Hr, and define Ht to offset the result:

import cv2
import numpy as np

# K and R are the intrinsic and rotation matrices from calibration, as above
Hr = K @ R.T @ np.linalg.pinv(K)

# warp the image corner points:
w, h = image_size
points = [[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]
points = np.array(points, np.float32).reshape(-1, 1, 2)

warped_points = cv2.perspectiveTransform(points, Hr).squeeze()

# bounding box of the warped corner points:
xmin, ymin = warped_points.min(axis=0)
xmax, ymax = warped_points.max(axis=0)

# output size:
warped_image_size = int(round(xmax - xmin)), int(round(ymax - ymin))

# offset that shifts the warped image back into view:
Ht = np.eye(3)
Ht[0, 2] = -xmin
Ht[1, 2] = -ymin

H = Ht @ Hr
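With H and warped_image_size in hand, the rectified image follows directly (undistorted stands for the undistorted input image, as before):

rectified = cv2.warpPerspective(undistorted, H, warped_image_size)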