Question

I have already computed the intrinsics (camera matrix and distortion coefficients) in the lab.

I then moved the cameras out into the field. I used about 6-10 known locations in the real world to estimate each camera's pose with solvePnP(), so I also have both cameras' rotation and translation.

I now want to use the two cameras for stereo correspondence. My question is: do I have to use stereoCalibrate()?

Or can I call stereoRectify() right away, using the already known intrinsics? stereoRectify() expects a rotation and a translation; the documentation says it expects:

"The rotation matrix between the 1st and the 2nd cameras’ coordinate systems."

Since I have the pose of both cameras, can I simply subtract the two translation vectors and the two rotation vectors I got from solvePnP, and pass the result to stereoRectify()? (Both cameras use the same common object-point reference system.)


Solution

Calibrating the cameras against the world (e.g. your known locations) is different from calibrating them with respect to each other. You cannot obtain the inter-camera rotation by subtracting the rotation vectors: rotations compose by matrix multiplication, not by addition. Strictly speaking, the translation is entangled with the rotation as well, so plain subtraction of the solvePnP translation vectors is only exact when both cameras happen to have the same orientation. That's why I'd advise you to use the stereo calibration provided by OpenCV.
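That said, if both solvePnP poses are accurate, the relative pose can be composed in closed form. Writing each pose as x_c = R_i x_w + t_i, the camera-1-to-camera-2 transform is R = R2 R1ᵀ and T = t2 − R t1, which is the (R, T) pair stereoRectify() expects. A minimal numpy sketch, using hypothetical pose values (in practice R1 and R2 would come from cv2.Rodrigues applied to the solvePnP rotation vectors):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Inter-camera pose mapping camera-1 coordinates to camera-2 coordinates.

    Derivation: x_c1 = R1 x_w + t1 and x_c2 = R2 x_w + t2, hence
    x_c2 = (R2 R1^T) x_c1 + (t2 - R2 R1^T t1).
    """
    R = R2 @ R1.T
    T = t2 - R @ t1
    return R, T

def rot_z(a):
    """Rotation matrix about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical world-to-camera poses, not taken from the question:
R1, t1 = rot_z(0.1), np.array([0.0, 0.0, 5.0])
R2, t2 = rot_z(0.3), np.array([1.0, 0.0, 5.0])

R, T = relative_pose(R1, t1, R2, t2)

# Sanity check: routing a world point through camera 1 and then through
# (R, T) must land on the same camera-2 coordinates as the direct path.
x_w = np.array([2.0, -1.0, 3.0])
assert np.allclose(R @ (R1 @ x_w + t1) + T, R2 @ x_w + t2)
```

Note that T here is expressed in camera-2's frame, matching OpenCV's stereo convention (x2 = R x1 + T); it is not simply t2 − t1 unless the two rotations are identical. Stereo calibration will still generally give a more consistent result, because it optimizes the relative pose directly rather than chaining two independently estimated poses.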

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow