Problem

I would like some help continuing my code, which uses the OpenCV library, to find the depth values of objects seen by the cameras.

I have already done the calibration and computed the disparity map, but I can't find clear guidance on how to calculate the depth value of each pixel seen in the two photos taken by the cameras.

Can anyone help me? Thank you

Solution

Here is a link addressing your problem, including a simple algorithm for depth estimation: http://www.epixea.com/research/multi-view-coding-thesisse13.html
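
If you are using OpenCV end to end, the usual shortcut is cv2.reprojectImageTo3D, which applies the reprojection matrix Q produced by cv2.stereoRectify to the whole disparity map at once. Here is a minimal sketch; the image size, focal length, baseline, and constant disparity below are made-up values for illustration:

```python
import numpy as np
import cv2

# Illustrative inputs: in practice `disparity` comes from a stereo matcher,
# e.g. cv2.StereoSGBM_create(...).compute(left, right) / 16.0, and `Q` is the
# 4x4 reprojection matrix returned by cv2.stereoRectify during calibration.
h, w = 240, 320
f, B = 700.0, 0.06                                   # assumed focal length (px) and baseline (m)
disparity = np.full((h, w), 20.0, dtype=np.float32)  # toy constant disparity

Q = np.float32([[1,  0, 0,       -w / 2.0],
                [0, -1, 0,        h / 2.0],
                [0,  0, 0,        f],
                [0,  0, 1.0 / B,  0]])

# Reproject every pixel to 3D: an (h, w, 3) array of (X, Y, Z) coordinates
# in the same units as the baseline (metres here).
points_3d = cv2.reprojectImageTo3D(disparity, Q)

depth = points_3d[:, :, 2]   # Z = f*B/D per pixel
print(depth[0, 0])           # ~2.1 m for D = 20 px
```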

Other tips

You can use these formulas to calculate the 3D coordinates of the point cloud:

Z = fB/D
X = (col-w/2)*Z/f
Y = (h/2-row)*Z/f

where X, Y, Z are world coordinates; f is the focal length of the camera in pixels after calibration; B is the baseline (camera separation); and D is the disparity. col and row are the column and row coordinates of a pixel in an image of width w and height h.
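
As a minimal NumPy sketch of these formulas (assuming f is in pixels, B is in the units you want for the output, and non-positive disparities are treated as invalid):

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, B):
    """Apply Z = f*B/D, X = (col - w/2)*Z/f, Y = (h/2 - row)*Z/f per pixel.

    disparity : (h, w) array of disparities in pixels
    f         : focal length in pixels (from calibration)
    B         : baseline (camera separation), e.g. in metres
    Returns an (h, w, 3) array of (X, Y, Z); Z is 0 where disparity <= 0.
    """
    h, w = disparity.shape
    rows, cols = np.indices((h, w))

    # Depth from disparity, guarding against division by zero.
    Z = np.where(disparity > 0, f * B / np.maximum(disparity, 1e-6), 0.0)

    # Back-project through the pinhole model, with the principal point
    # assumed to sit at the image centre (w/2, h/2).
    X = (cols - w / 2.0) * Z / f
    Y = (h / 2.0 - rows) * Z / f
    return np.dstack((X, Y, Z))
```

Note that the Y formula flips the row axis so that Y points up; drop the flip if you prefer image coordinates.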

However, if you managed to calibrate your cameras and obtain a disparity map, you should already know this. Calibration and disparity-map computation are an order of magnitude more complex than the calculations above.
