Question

I would like some help continuing my code, which uses the OpenCV library, to find the depth values of objects seen by the cameras.

I have already done the calibration and found the disparity map, but I can't find clear guidance on how to calculate the depth value of each pixel seen in the two photos taken by the cameras.

Can anyone help me? Thank you


Solution

Here is a link for your problem, including a simple algorithm for depth estimation: http://www.epixea.com/research/multi-view-coding-thesisse13.html
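In practice, if you went through OpenCV's stereo calibration pipeline, the rectification step (cv2.stereoRectify) gives you a 4x4 reprojection matrix Q, and cv2.reprojectImageTo3D turns the disparity map into per-pixel 3D points directly. Below is a minimal sketch; the file names are placeholders, and the divide-by-16 step assumes the disparity came from StereoBM/StereoSGBM, which return fixed-point values scaled by 16:

```python
import cv2
import numpy as np

# Assumed inputs, saved during your calibration and matching steps:
# the disparity map and the 4x4 reprojection matrix Q returned by
# cv2.stereoRectify. The file names here are placeholders.
disparity = np.load("disparity.npy").astype(np.float32)
Q = np.load("Q.npy")

# StereoBM/StereoSGBM produce fixed-point disparities scaled by 16;
# undo that scaling if your map came from one of them.
disparity /= 16.0

# Back-project every pixel into 3D camera coordinates (X, Y, Z).
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# The Z channel is the per-pixel depth. Pixels with no valid match
# get extreme Z values, so mask them before using the result.
depth = points_3d[:, :, 2]
valid = disparity > 0
print("median depth over valid pixels:", np.median(depth[valid]))
```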

Other tips

You can use these formulas to calculate the 3D coordinates of a point cloud:

Z = fB/D
X = (col-w/2)*Z/f
Y = (h/2-row)*Z/f

where X, Y, Z are the world coordinates; f is the focal length of the camera in pixels after calibration; B is the baseline, i.e. the camera separation; and D is the disparity. col and row are the column and row coordinates of a pixel in an image of height h and width w.
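As a sketch, these formulas translate directly into vectorized NumPy operations over the whole disparity map. The function name below and the f/B values in the usage comment are assumptions for illustration, not part of any OpenCV API:

```python
import numpy as np

def disparity_to_point_cloud(D, f, B):
    """Apply Z = f*B/D, X = (col - w/2)*Z/f, Y = (h/2 - row)*Z/f
    to every pixel of the disparity map D (disparities in pixels),
    given focal length f in pixels and baseline B in metres."""
    h, w = D.shape
    row, col = np.indices((h, w), dtype=np.float32)

    with np.errstate(divide="ignore"):   # zero disparity -> infinite depth
        Z = f * B / D
    X = (col - w / 2.0) * Z / f
    Y = (h / 2.0 - row) * Z / f
    return np.dstack([X, Y, Z])          # h x w x 3 array of (X, Y, Z)

# Example call with made-up calibration values:
# points = disparity_to_point_cloud(disparity_map, f=700.0, B=0.12)
```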

However, if you managed to calibrate your cameras and obtain a disparity map, you should already know this: calibration and disparity-map computation are an order of magnitude more complex than the calculations above.
