Question

I'm using SharpDX to write a "3D soft engine from scratch" as per a tutorial on MSDN. However, I cannot get my expected and actual fields of view to match up.

My now-simplified world consists of four 3D vertices with Z=0 at (0,1,0); (0,-1,0); (1,0,0); (-1,0,0). I place the camera at (0,0,2) and look at (0,0,0) with (0,1,0) up. I set up the projection matrix with an FOV of 90 degrees (PI/2 radians) and prepare to render each of the four vertices by calling Vector3.TransformCoordinate(Vector3, Matrix).
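Roughly, the setup looks like this (the aspect ratio and near/far plane values here are illustrative placeholders, not taken from my actual code):

```csharp
using System;
using SharpDX;

// The four test vertices, all in the Z=0 plane.
var vertices = new[]
{
    new Vector3(0f,  1f, 0f),
    new Vector3(0f, -1f, 0f),
    new Vector3(1f,  0f, 0f),
    new Vector3(-1f, 0f, 0f),
};

// Camera at (0,0,2), looking at the origin, with +Y up.
Matrix view = Matrix.LookAtLH(new Vector3(0f, 0f, 2f), Vector3.Zero, Vector3.UnitY);

// 90-degree vertical FOV; aspect and near/far are placeholders.
Matrix projection = Matrix.PerspectiveFovLH((float)Math.PI / 2f, 1.0f, 0.01f, 10.0f);

foreach (var vertex in vertices)
{
    // TransformCoordinate multiplies by the matrix and divides by w,
    // yielding normalized device coordinates.
    Vector3 projected = Vector3.TransformCoordinate(vertex, view * projection);
    Console.WriteLine(projected);
}
```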

My understanding is that the FOV is used to calculate a scale factor 1/TAN(FOV/2) that is applied to every (Y,Z) pair such that y' = y * scale / z. Similarly, given the aspect ratio, a scale is calculated for the (X,Z) pairs too.
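As a sketch of that mental model (this is my understanding of the pinhole projection, not SharpDX's actual implementation):

```csharp
using System;

static class Pinhole
{
    // scale = 1 / tan(fov / 2); each coordinate is then divided by its
    // depth. The horizontal scale is additionally divided by the aspect
    // ratio (width / height).
    public static (double X, double Y) Project(
        double x, double y, double z, double fovRadians, double aspect)
    {
        double scale = 1.0 / Math.Tan(fovRadians / 2.0);
        return (x * scale / aspect / z, y * scale / z);
    }
}
```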

For a (vertical) FOV of 90 degrees and camera distance 2, I would have expected all of my vertices to be reasonably far from the edges of the bitmap. However, the top and bottom vertices have Y-values of -0.49 and +0.49, which means they'd be practically touching the screen boundary if rendered. Have I misunderstood the concept of FOV? The results I'm seeing are what I'd expect from an FOV of about 53 degrees. It's as though the tangent is being halved before its inverse is taken...

I first tried this with the camera at (0,0,1) and an FOV of 90 degrees because I felt sure that the scale factor would be 1 (1/TAN(90/2) = 1), but the vertices were off screen.
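Plugging both scenarios into that formula (continuing from the Project sketch above) essentially reproduces the numbers I'm seeing:

```csharp
// FOV 90°, camera 2 units away: y' = 1 * 1 / 2 = 0.5 (I observe ±0.49)
Console.WriteLine(Pinhole.Project(0, 1, 2, Math.PI / 2, 1.0));

// FOV 90°, camera 1 unit away: y' = 1 * 1 / 1 = 1.0
Console.WriteLine(Pinhole.Project(0, 1, 1, Math.PI / 2, 1.0));
```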


Solution

The perspective transformation maps coordinates from camera space to clip space.

Camera space is an ordinary Cartesian coordinate system with the camera at the origin.

Clip space is a temporary space that is afterwards mapped to the viewport. Both the x and y coordinates are in the range [-1, 1] for points that lie within the viewport. That is the missing piece here: a projected y of ±0.49 is only about halfway from the center of the viewport to its edge, not touching it. Treating clip space as if it spanned [-0.5, 0.5] is exactly the factor-of-two error that makes a 90-degree FOV look like one of about 53 degrees: tan(53.13°/2) ≈ 0.5, half of tan(90°/2) = 1, which matches the "halved tangent" observed in the question.
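A minimal sketch of the usual NDC-to-viewport mapping (the width/height parameters are placeholders for your bitmap's size):

```csharp
// Map normalized device coordinates ([-1, 1] on each axis) to pixel
// coordinates; y is flipped because screen space grows downward.
static (float Px, float Py) ToViewport(float xNdc, float yNdc, int width, int height)
{
    float px = (xNdc + 1f) * 0.5f * width;
    float py = (1f - yNdc) * 0.5f * height;
    return (px, py);
}
```

With that mapping, y = 0.49 on a 480-pixel-high bitmap lands around pixel 122, comfortably inside the image.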

Licensed under: CC-BY-SA with attribution