Question

I am trying to compute 3D coordinates from several pairs of matched points across two views.
First, I used the MATLAB function estimateFundamentalMatrix() to get the fundamental matrix F of the matched points (number > 8), which is:

F1 =[-0.000000221102386   0.000000127212463  -0.003908602702784
     -0.000000703461004  -0.000000008125894  -0.010618266198273
      0.003811584026121   0.012887141181108   0.999845683961494]

My camera, which took these two pictures, was pre-calibrated with the intrinsic matrix:

K = [12636.6659110566, 0, 2541.60550098958
     0, 12643.3249022486, 1952.06628069233
     0, 0, 1]

From this information I then computed the essential matrix using:

E = K'*F*K
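The question's code is MATLAB; as an illustrative sketch, the same step in Python/NumPy (the helper name `essential_from_fundamental` is mine) could look like this. Many implementations also project the result back onto the essential-matrix manifold, since a valid E must have singular values (s, s, 0):

```python
import numpy as np

# Intrinsic matrix K from the question (same camera for both views)
K = np.array([[12636.6659110566, 0.0, 2541.60550098958],
              [0.0, 12643.3249022486, 1952.06628069233],
              [0.0, 0.0, 1.0]])

def essential_from_fundamental(F, K):
    """Compute E = K' * F * K, then enforce the (s, s, 0) singular values."""
    E = K.T @ F @ K
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0  # average the two largest singular values
    return U @ np.diag([s, s, 0.0]) @ Vt
```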

Using SVD, I finally obtained the camera projection matrices:

P1 = K*[ I | 0 ] 

and

P2 = K*[ R | t ]

Where R and t are:

R = [ 0.657061402787646 -0.419110137500056  -0.626591577992727
     -0.352566614260743 -0.905543541110692   0.235982367268031
     -0.666308558758964  0.0658603659069099 -0.742761951588233]

t = [-0.940150699101422
      0.320030970080146
      0.117033504470591]
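As background on where R and t (and the four possible solutions mentioned next) come from: the standard SVD construction of Hartley & Zisserman yields two candidate rotations and two translation signs. A Python/NumPy sketch (the function name is mine; this mirrors the usual textbook recipe):

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) pose candidates from an essential matrix.

    Standard construction: E = U diag(s, s, 0) V'; the rotations are
    U W V' and U W' V', and the translation is +/- the last column of U.
    """
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

The correct candidate is the one for which triangulated points have positive depth in both cameras (the cheirality test).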

I know there should be 4 possible solutions. However, my computed 3D coordinates do not seem to be correct.
I used the camera to take pictures of a FLAT object with marked points, and I matched the points by hand (so there should be no obvious mistakes in the raw data). But the result turned out to be a surface with a slight bending.
I guess this might be because the pictures were not corrected for lens distortion (although I remember that I actually did that).

I just want to know whether this method of solving the 3D reconstruction problem is right, especially when the camera intrinsic matrix is already known.
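For reference, the triangulation step that produces the 3D points from P1 and P2 can be sketched with a linear DLT (Python/NumPy; the helper name is mine, and the image points are assumed to be already undistorted):

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates
    of the same point in each image.
    """
    # Each view contributes two rows of the homogeneous system A X = 0
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous coordinates
    return X[:3] / X[3]        # homogeneous -> Euclidean
```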

Edit by JCraft at Aug. 4: I have redone the process and got some pictures showing the problem. I will write another question with details and then post the link.

Edit by JCraft at Aug. 4: I have posted a new question: Calibrated camera get matched points for 3D reconstruction, ideal test failed. And @Schorsch, I really appreciate your help formatting my question. I will try to learn how to format posts on SO and also try to improve my grammar. Thanks!

Solution

If you only have the fundamental matrix and the intrinsics, you can only get a reconstruction up to scale; that is, your translation vector t is in unknown units. You can get the 3D points in real units in several ways:

  • You need to have some reference points in the world with known distances between them. This way you can compute their coordinates in your unknown units and calculate the scale factor to convert your unknown units into real units.
  • You need to know the extrinsics of each camera relative to a common coordinate system. For example, you can have a checkerboard calibration pattern somewhere in your scene that you can detect and compute extrinsics from. See this example. By the way, if you know the extrinsics, you can compute the Fundamental matrix and the camera projection matrices directly, without having to match points.
  • You can do stereo calibration to estimate the R and the t between the cameras, which would also give you the Fundamental and the Essential matrices. See this example.
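For the first option, a minimal sketch of the rescaling step (Python/NumPy; the function name and argument conventions are mine):

```python
import numpy as np

def rescale_reconstruction(points, idx_a, idx_b, known_distance):
    """Scale up-to-scale 3D points into real units using one known distance.

    points: (N, 3) array of triangulated points in arbitrary units.
    idx_a, idx_b: indices of two reference points whose true separation
    (in, say, millimetres) is known_distance.
    """
    measured = np.linalg.norm(points[idx_a] - points[idx_b])
    scale = known_distance / measured
    return points * scale
```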

OTHER TIPS

Flat objects are critical surfaces, so it is not possible to achieve your goal from them alone. Try adding two (or more) points off the plane (see Hartley and Zisserman, or another text on the matter, if you are still interested).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow