Given two images taken from the same location with the same FOV, where only the orientation (yaw, pitch, roll) changes between them, how can I calculate the FOV of the images using matching pixels identified by a feature-matching algorithm?
Given that a matching pair of pixels in the two images should unproject (pixel location transformed by the inverse Model-View-Projection matrix) to the same unit vector, I am assuming I can solve for F in the perspective projection matrix and from there calculate the FOV.
Note: The location and orientation angles of the camera are known.
Solving for the perspective projection matrix has me solving for F such that VecA/|VecA| = VecB/|VecB|. I am not sure how to solve this, because the vectors only need to be equal after they are normalized, not before. The components of VecA and VecB break down to the following:
mv = inverse Model-View matrix for image
width = width of image in pixels
height = height of image in pixels
px = location of the matching feature, normalized to [-1, 1]
F = tan(FOV/2)
Vec.X = (mv.m11 * F * px.x) + (mv.m12 * F * height/width * px.y) - mv.m13
Vec.Y = (mv.m21 * F * px.x) + (mv.m22 * F * height/width * px.y) - mv.m23
Vec.Z = (mv.m31 * F * px.x) + (mv.m32 * F * height/width * px.y) - mv.m33
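In code, the unprojection above looks roughly like this (a minimal Python sketch; I'm assuming mv.mRC denotes row R, column C of the 3x3 inverse Model-View matrix, and that px is already in normalized coordinates in [-1, 1]):

```python
import math

def unproject(mv, px, F, width, height):
    """Ray direction for the normalized pixel px = (x, y).

    mv: inverse Model-View matrix as a 3x3 nested list (rows)
    F:  tan(FOV / 2)
    """
    x = F * px[0]
    y = F * (height / width) * px[1]
    return tuple(mv[r][0] * x + mv[r][1] * y - mv[r][2] for r in range(3))

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)
```

Solving for F then amounts to finding the value for which normalize(unproject(mvA, pxA, F, width, height)) equals normalize(unproject(mvB, pxB, F, width, height)).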
Example Data:
width = 704
height = 480
pxA = (0.28409090909090917, -0.16666666666666663)
pxB = (0.0, 0.0)
mvA = -0.6560590289905072, -0.0902630302223078, -0.7492924234603735,
-0.7547095802227721, 0.0784644550873574, 0.6513499664602825,
0.0, 0.9928221969028145, -0.11959968786359426
mvB = -0.7116393973440803, -0.06257611232965804, -0.6997525264787975,
-0.7025449225122575, 0.06338616284802252, 0.7088108536215074,
0.0, 0.9960253131949559, -0.08907062071687842
F = 0.2679491924311227
VecA = 0.7021003426128247, -0.7111890623383523, 0.08936947128543493
VecB = 0.6997525264787975, -0.7088108536215074, 0.08907062071687842
|VecA|= 1.0033552092278133
|VecB|= 1.0
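As a numerical sanity check on the data above (a minimal Python sketch using only the VecA/VecB values listed): the magnitudes differ, but the directions agree once normalized, which is exactly the condition I am trying to solve for.

```python
import math

vecA = (0.7021003426128247, -0.7111890623383523, 0.08936947128543493)
vecB = (0.6997525264787975, -0.7088108536215074, 0.08907062071687842)

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def normalize(v):
    n = norm(v)
    return tuple(c / n for c in v)

# VecA is not unit length, but after normalization it matches VecB,
# so this F satisfies VecA/|VecA| = VecB/|VecB| for this pixel pair.
err = max(abs(a - b) for a, b in zip(normalize(vecA), normalize(vecB)))
```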