Question

Forgive me if this question has been asked before, but I'd like to know where to start with stereo vision to convert 2D coordinates into 3D coordinates. I'm trying to track balls in 3D, like Hawk-Eye does. I have two high-speed cameras and I'm able to detect the ball in each camera. I understand that I need to calibrate the cameras, synchronize them, and run an algorithm to remove lens distortion, etc. However, I don't know what the next step is to convert the 2D coordinates into world 3D coordinates.

Does anybody who knows how to perform triangulation have advice on this? Also, the cameras will not be parallel to each other but at different angles, so somehow I also need to know the location of each camera in 3D coordinates.

Any help with this would be gratefully received.

Many thanks


Solution

To convert 2D into 3D for two calibrated cameras you would use these formulas:

    z = focal * baseline / disparity
    x = z * u / focal
    y = z * v / focal

where:

    focal - the focal length of your camera in pixels
    u = column - Cx, where Cx ≈ image_width/2 (calibration will give you a more precise value)
    v = -row + Cy, where Cy ≈ image_height/2
    baseline - the horizontal distance between the cameras
    disparity - the difference in the horizontal position of the ball in the two images
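As a rough sketch, the formulas above look like this in Python. The focal length, baseline, and principal point below are made-up example values; in practice they come from your calibration:

```python
import numpy as np

# Made-up example values; in practice these come from camera calibration.
focal = 1200.0          # focal length in pixels
baseline = 0.30         # horizontal distance between the cameras, in meters
cx, cy = 960.0, 540.0   # principal point, roughly the image center

def pixel_to_3d(col_left, row_left, col_right):
    """Recover (x, y, z) for a point matched in rectified left/right images."""
    disparity = col_left - col_right   # horizontal shift of the ball between views
    u = col_left - cx                  # column relative to the principal point
    v = -row_left + cy                 # row relative to the principal point (y up)
    z = focal * baseline / disparity   # depth from disparity
    x = z * u / focal
    y = z * v / focal
    return x, y, z

x, y, z = pixel_to_3d(1100.0, 500.0, 1040.0)
```

Note the units: z comes out in whatever unit the baseline is measured in (meters here), since focal and disparity are both in pixels and cancel.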

Strictly speaking, you need to do rectification only when working with dense stereo. For sparse stereo (a few matched points, like a ball center) you only need calibration.
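For non-parallel cameras, as in the question, the sparse case is usually handled by triangulating from the two projection matrices rather than from a disparity. Below is a minimal numpy sketch of linear (DLT) triangulation; the intrinsics, rotation, and translation are made-up stand-ins for what stereo calibration would give you (OpenCV's cv2.triangulatePoints does the same job):

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point seen by two cameras.
    P1, P2: 3x4 projection matrices; pt1, pt2: (u, v) pixel coordinates."""
    u1, v1 = pt1
    u2, v2 = pt2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space direction of A
    return X[:3] / X[3]        # dehomogenize to (x, y, z)

# Hypothetical calibration result: shared intrinsics K, and a second camera
# rotated 20 degrees about the y axis and shifted along x (values made up).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
th = np.deg2rad(20.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([[-0.5], [0.0], [0.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Project a known ball position into both views, then recover it.
ball = np.array([0.2, -0.1, 5.0, 1.0])
p1 = P1 @ ball; p1 = p1[:2] / p1[2]
p2 = P2 @ ball; p2 = p2[:2] / p2[2]
recovered = triangulate(P1, P2, p1, p2)
```

With real detections the two rays won't intersect exactly because of pixel noise; the SVD solution gives the least-squares compromise, which is usually good enough for ball tracking.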

Other tips

I recently found a fragment of the 'Learning OpenCV' book. It seems to be a good source of knowledge about both the theory behind stereo vision and its implementation in OpenCV. Although the API in the book is outdated, the general approach is still current. To sum up, they recommend:

  1. Remove distortion (you can do this already if you have calibrated cameras)
  2. Adjust distances and angles between cameras
  3. Find the same features on both images
  4. Estimate depth by a simple calculation based on the object's position in each image.
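As an illustration of step 1, a single pixel can be undistorted with the radial distortion model that OpenCV's calibration reports. The coefficients k1, k2 and intrinsics below are made up; in practice you'd take them from calibration, and OpenCV's cv2.undistortPoints does this for you:

```python
import numpy as np

# Made-up radial distortion coefficients and intrinsics; calibration yields
# the real k1, k2, fx, fy, cx, cy for your lens.
k1, k2 = -0.25, 0.08
fx, fy, cx, cy = 1000.0, 1000.0, 640.0, 360.0

def undistort_point(u, v, iterations=10):
    """Remove radial distortion from one pixel by fixed-point iteration."""
    xd = (u - cx) / fx           # normalized distorted coordinates
    yd = (v - cy) / fy
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x = xd / factor          # invert distorted = undistorted * factor
        y = yd / factor
    return cx + fx * x, cy + fy * y

# Roundtrip check: distort a known point with the forward model, then undo it.
x_true, y_true = 0.2, 0.1
r2 = x_true ** 2 + y_true ** 2
f = 1.0 + k1 * r2 + k2 * r2 * r2
u_d = cx + fx * x_true * f
v_d = cy + fy * y_true * f
u_u, v_u = undistort_point(u_d, v_d)
```

For tracking a single ball you only need to undistort the detected center in each frame, which is much cheaper than undistorting whole images.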
License: CC-BY-SA with attribution
Not affiliated with StackOverflow