Question

Normally, depth sensing for 3D measurement in imaging uses a stereo approach with two cameras, but I have seen some applications that use three cameras for depth measurement, even though the underlying image processing seems similar. Why do some systems use three cameras instead of two? Is it for better accuracy in depth sensing? Thanks


Solution

If you study the history of the well-known camera company Point Grey, you will find the Triclops: three cameras arranged in a triangular pattern, one baseline horizontal and one vertical, which was supposed to give better matching. The rationale is that a horizontally shifted pair can only localize matches on vertical edges (horizontal intensity gradients), while a vertically shifted pair relies on horizontal edges.

This turned out to be a waste of hardware, since in practice both gradient directions are present in a typical correlation window. Another early mistake was using color for stereo matching: it looks like an attractive option, but it adds more noise and variability than it helps. Point Grey now has the Digiclops, with two cameras that are grayscale, not color.

When you do see three cameras, they are typically lined up horizontally to provide a choice between a wide and a narrow baseline. The narrow baseline is good for close objects, while the wide one has a longer 'dead zone' (a near region with no stereo overlap) but distinguishes depth better at long range, as the sketch below illustrates.
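As a back-of-the-envelope illustration of that tradeoff, here is a sketch of the standard rectified-stereo relation Z = f*B/d. The focal length (1000 px), disparity search limit (128 px), and the two baselines are made-up example numbers, not taken from any particular rig:

```python
# Pinhole stereo: depth Z = f * B / d (f in pixels, baseline B in meters,
# disparity d in pixels). All numbers below are illustrative assumptions.
f_px = 1000.0   # assumed focal length in pixels
d_max = 128.0   # assumed disparity search limit of the matcher

for name, baseline_m in (("narrow", 0.05), ("wide", 0.30)):
    z_min = f_px * baseline_m / d_max      # closest measurable depth: edge of the 'dead zone'
    z_probe = 10.0                         # probe depth of 10 m
    dz = z_probe**2 / (f_px * baseline_m)  # depth change per 1 px disparity error at 10 m
    print(f"{name} baseline: Z_min = {z_min:.2f} m, depth step at 10 m ~ {dz:.2f} m")
```

With these numbers the wide baseline cannot see anything closer than about 2.3 m, but at 10 m a one-pixel matching error costs only about 0.33 m of depth instead of about 2 m.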

Stereo cameras never took off for another reason: an absence of texture creates huge holes in the disparity map. Kinect happened to be the winner here because it projects its own texture (though it cannot do this in sunlight). The sketch below demonstrates the problem.
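Here is a minimal, self-contained sketch of that failure mode using OpenCV's block matcher on a synthetic rectified pair: the left half of the frame is textured noise, the right half is flat, and valid disparities come back almost exclusively from the textured half. Image size, disparity, and matcher parameters are arbitrary example values:

```python
import numpy as np
import cv2

# Synthetic rectified pair: textured noise in the left half of the frame,
# a flat (textureless) region in the right half, with a uniform true disparity.
rng = np.random.default_rng(0)
h, w, d_true = 240, 320, 8
left = np.full((h, w), 128, dtype=np.uint8)
left[:, : w // 2] = rng.integers(0, 256, size=(h, w // 2), dtype=np.uint8)
right = np.roll(left, -d_true, axis=1)  # the right view sees features shifted left

stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disp = stereo.compute(left, right)  # int16, disparity scaled by 16; negative where matching failed

valid_textured = np.mean(disp[:, : w // 2] > 0)
valid_flat = np.mean(disp[:, w // 2 :] > 0)
print(f"valid disparity -- textured half: {valid_textured:.0%}, flat half: {valid_flat:.0%}")
```

StereoBM even has an explicit texture threshold that rejects low-texture windows; in real scenes (blank walls, sky, untextured floors) that is exactly where the holes appear.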
