I am building a 3D tracking system that uses up to 12 cameras. I understand the OpenCV function stereoCalibrate can find the relative position and orientation of two cameras. I need to do this for 12 cameras. Is there an OpenCV function or available source code that can achieve the desired result?

I am currently using the stereoCalibrate function and simply applying it to all pairs of cameras. Is there a more efficient way?


Solution

You probably don't want to calibrate the cameras in pairs. At least initially, it's preferable to calibrate each camera's position/pose independently of the others, all with respect to a fixed reference frame in the scene. Once you have a viable solution of this kind, you can refine it for specific pairs if you plan to do stereo on them.

It's hard to give you advice without more details on the geometry of your setup (for example, are the cameras all looking at a common subject?). Generally speaking:

  • It may be a good idea to calibrate the intrinsic parameters camera-by-camera first: you can do that by collecting images of a calibration target (e.g. a checkerboard) for each camera independently of the others, but only after you have mounted the cameras in place with the optics locked to their final settings. This way you'll only need to solve for the extrinsics in the final camera network.
  • You'll very likely need a 3D target/rig, manufactured with known geometry, to calibrate the relative poses. A fairly common kind consists of a set of three planar rings intersecting each other at 90 deg angles, with checkerboard-like markers on them.
  • Alternatively, you could use a single planar target and rotate it by known angles, e.g. using a turntable.
  • You will likely need to run a final bundle adjustment, with all parameters optimized together, for the final "tightening" of the solution. Look up Google's Ceres solver for an excellent, free one.
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow