Taking one camera and moving it around to capture two images of the same object from different viewpoints, one should be able to compute a matrix that relates the two views. How is this accomplished in OpenCV?


Solution

If said object is a calibration pattern like the chessboard used by OpenCV, then the camera calibration routine mentioned by ChrisO will give you both the camera intrinsics (focal length, principal point, and lens distortion) and the camera extrinsics (where the cameras are relative to each other in space).
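For instance, through the modern Python bindings it might look like the sketch below. The pattern size (9x6 inner corners), the square size, and the filenames view1.png / view2.png are all assumptions, and calibration from a planar target generally benefits from more than two views:

    import numpy as np
    import cv2

    pattern_size = (9, 6)   # inner corners per row/column -- an assumption
    square_size = 1.0       # board square edge length, in arbitrary units

    # 3D corner coordinates in the board's own frame (z = 0 plane)
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points = [], []
    for fname in ("view1.png", "view2.png"):   # placeholder filenames
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsics (camera matrix K, distortion coefficients) plus one
    # extrinsic pose (rvec, tvec) per view, i.e. where the camera was
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)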

If you have a general object, then you need to establish a set of 2D point correspondences, which you can feed into cvFindFundamentalMat. This computes the fundamental matrix F that relates the two perspectives: for each point x in camera 1 and the corresponding point x' in camera 2, x'ᵀFx = 0. From F you can similarly recover the epipoles, etc. The function uses the 8-point algorithm, which requires at least 8 point correspondences.
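A minimal sketch of that call via the modern Python bindings (cv2.findFundamentalMat is the successor of cvFindFundamentalMat). Here synthetic correspondences, generated by projecting random 3D points into two camera poses, stand in for real matched coordinates:

    import numpy as np
    import cv2

    # Synthetic scene: 20 random 3D points in front of both cameras
    rng = np.random.default_rng(0)
    X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])  # intrinsics

    rvec1, tvec1 = np.zeros(3), np.zeros(3)          # camera 1 at the origin
    rvec2 = np.array([0.0, 0.1, 0.0])                # camera 2: small rotation
    tvec2 = np.array([0.5, 0.0, 0.0])                # ... plus a sideways shift

    pts1, _ = cv2.projectPoints(X, rvec1, tvec1, K, None)
    pts2, _ = cv2.projectPoints(X, rvec2, tvec2, K, None)
    pts1, pts2 = pts1.reshape(-1, 2), pts2.reshape(-1, 2)

    # 8-point algorithm (needs >= 8 correspondences)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

    # Verify the epipolar constraint x'^T F x = 0 for every pair
    res = [abs(np.append(p2, 1.0) @ F @ np.append(p1, 1.0))
           for p1, p2 in zip(pts1, pts2)]
    print("max |x'^T F x| =", max(res))   # near zero for noise-free points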

You can get the correspondences either manually or with a robust feature extractor and matcher along the lines of MSER/Affine Harris + SIFT.
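As one concrete (hedged) example of the automatic route, the sketch below uses SIFT with Lowe's ratio test — img1.png / img2.png are placeholder filenames, and MSER or affine-Harris detectors could be swapped in — and then feeds the matches into the RANSAC variant of findFundamentalMat, which tolerates the outliers real matchers produce better than the plain 8-point algorithm:

    import numpy as np
    import cv2

    img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
    img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute SIFT descriptors in both images
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matching with Lowe's ratio test to drop ambiguous matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Robust estimate; inlier_mask flags which matches survived RANSAC
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)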
