Question

I would like to calculate the z-buffer value of an object position from the output of glm::project. The z-buffer calculation in the code below is taken from https://en.wikipedia.org/wiki/Z-buffering.

What I've tried

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::lookAt, glm::perspective, glm::project

int windowWidth = 800;
int windowHeight = 600;
float positions[] = { -42.5806f, 27.8838f, 49.9729f }; // Example point
glm::mat4 model( 1.0f ); // identity; GLM may leave a default-constructed matrix uninitialized
glm::mat4 view;
glm::mat4 proj;

view = glm::lookAt(
    glm::vec3( 0.0f, 0.0f, 2.0f ),
    glm::vec3( 0.0f, 0.0f, 0.0f ),
    glm::vec3( 0.0f, 1.0f, 0.0f )
    );

float aspect = (float)windowWidth / (float)windowHeight;
proj = glm::perspective( 45.0f, aspect, 0.1f, 10.0f ); // 45 degrees (older GLM; newer versions expect radians)

// Get screen coordinates from an object point
glm::vec3 screenCoords = glm::project(
   glm::vec3( positions[0], positions[1] , positions[2] ),
   view*model,
   proj,
   glm::vec4( 0, 0, windowWidth, windowHeight )
);

// Calculating the z-buffer
int zFar = 10;
int zNear = 0.1;
float zBufferValue = ( zFar + zNear ) / ( zFar - zNear )
                   + ( 1 / screenCoords.z ) * ( ( -2 * zFar * zNear ) / ( zFar - zNear ) );

The problem

The value of zBufferValue is 1 no matter how I rotate my model or which point I use. According to the Wikipedia page, the value should be between -1 (at the near plane) and 1 (at the far plane).

What am I doing wrong in my calculation?

Solution

Your last line of code is redundant. The depth calculation is done during the projection transform (and the subsequent perspective divide). Essentially what glm::project does is this:

// P:  projection matrix
// MV: modelview matrix
// v:  vertex to convert to screen space

vec4 result = P * MV * vec4(v, 1.0f);
result /= result.w;     // perspective divide

// convert Z from normalized device coords [-1, 1] to the [0, 1] depth range
result.z = (result.z + 1.0f) * 0.5f;
// ...(the X/Y viewport mapping is omitted here)

return vec3(result);

It also converts the X/Y coordinates from normalized device space [(-1, -1), (1, 1)] to screen space [(0, 0), (viewport_width, viewport_height)], but since you're only concerned with the depth buffer, I left that step out above.
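
For completeness, here is the same sketch with the viewport mapping included; projectToScreen is just a hypothetical name for illustration, not a GLM function:

// Manual equivalent of glm::project (assuming the default glDepthRange).
glm::vec3 projectToScreen( const glm::vec3& v, const glm::mat4& MV,
                           const glm::mat4& P, const glm::vec4& viewport )
{
    glm::vec4 clip = P * MV * glm::vec4( v, 1.0f ); // clip space
    glm::vec3 ndc  = glm::vec3( clip ) / clip.w;    // perspective divide -> NDC in [-1, 1]

    glm::vec3 screen;
    screen.x = viewport.x + ( ndc.x + 1.0f ) * 0.5f * viewport.z; // x in [vx, vx + width]
    screen.y = viewport.y + ( ndc.y + 1.0f ) * 0.5f * viewport.w; // y in [vy, vy + height]
    screen.z = ( ndc.z + 1.0f ) * 0.5f;                           // depth in [0, 1]
    return screen;
}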

So, ignoring the last three lines of your code, screenCoords.z should already contain a depth buffer value equivalent to what you would get from glReadPixels.
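
If you want to verify that against the actual depth buffer, you can read the stored value back. A minimal sketch, assuming your point produces the frontmost fragment at that pixel and the framebuffer has a readable depth attachment:

// Read back the depth stored at the projected window position.
float storedDepth = 0.0f;
glReadPixels( (GLint)screenCoords.x, (GLint)screenCoords.y, 1, 1,
              GL_DEPTH_COMPONENT, GL_FLOAT, &storedDepth );
// storedDepth should match screenCoords.z up to the precision of the depth buffer.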

Of course, the actual bits stored on the graphics card depend on the size of your depth buffer and how OpenGL is set up to use it. In particular, if you are using a custom glDepthRange, the values above will differ from what is stored in the depth buffer.
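
As a sketch of that mapping (ndcToWindowDepth is a hypothetical helper): with glDepthRange(n, f), the depth from the perspective divide lands in [n, f] rather than [0, 1]:

// Maps NDC depth [-1, 1] to window depth [n, f], where n and f are
// the values passed to glDepthRange (defaults: n = 0, f = 1).
float ndcToWindowDepth( float ndcZ, float n, float f )
{
    return n + ( f - n ) * ( ndcZ + 1.0f ) * 0.5f;
}

With the defaults, this reduces to the (result.z + 1.0f) * 0.5f step shown earlier.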

OTHER TIPS

You applied the formula in that Wikipedia article to the wrong values. You had already applied the projection matrix with glm::project, which is what the z' = ... formula does, so your code effectively applies the projection twice.
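
To see the difference, here is a sketch that applies the formula to the value it actually expects, the eye-space z of the point (using the zNear/zFar from your glm::perspective call):

// Eye-space position of the point, i.e. before the projection matrix.
glm::vec4 eye = view * model * glm::vec4( positions[0], positions[1], positions[2], 1.0f );

float zNear = 0.1f, zFar = 10.0f;
float z = -eye.z; // positive distance in front of the camera

// The z' formula from the Wikipedia article, applied to eye-space z:
float ndcZ = ( zFar + zNear ) / ( zFar - zNear )
           + ( 1.0f / z ) * ( ( -2.0f * zFar * zNear ) / ( zFar - zNear ) );

// ndcZ lies in [-1, 1]; glm::project then maps it to [0, 1], which is exactly screenCoords.z.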

The depth buffer values in OpenGL are in window coordinates and lie in the range [n, f], where n and f are set with glDepthRange(n, f) (the defaults are 0 and 1). You can read up on this in section 13.6.1 of the spec. These values have nothing to do with the zNear and zFar values used in the projection matrix.

glm::project simply assumes these default values, and, since it outputs window coordinates, that's the value that's written to the depth buffer. So the correct code is simply:

float zBufferValue = screenCoords.z;
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow