Question

I am reading an OpenGL book which points out two seemingly contradictory statements:

  1. The camera is positioned at the origin and stares down the negative z-axis.
  2. The depth buffer clears to a default value of 1.0, and pixels with lower Z values pass the visibility test to appear onscreen.
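If it helps, my understanding is that the book's defaults correspond to this GL state (a sketch; glClearDepth(1.0) and GL_LESS are the documented initial values):

    glEnable(GL_DEPTH_TEST);   /* depth testing is off until enabled */
    glClearDepth(1.0);         /* 1.0 is already the initial clear value */
    glDepthFunc(GL_LESS);      /* initial comparison: lower depth wins */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);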

While I understand the concept of the depth buffer, I do not understand how a pixel with a lower Z value is determined to be "in front of" one with a higher Z value for the purposes of passing the depth test.

I would expect that higher Z values are "in front" and thus win the depth buffer comparison. What am I missing here?

Solution

The values in the Z-buffer lie in the range [0.0, 1.0], where 0.0 corresponds to the near plane and 1.0 to the far plane, the farthest distance actually contained in the frustum (anything beyond it is not visible).

This means that when a pixel of a shape is drawn, its Z value is compared with the one currently stored at that position. If it is lower, the new pixel is nearer to the eye than whatever was there before (nothing, or a pixel of another shape), so the stored Z is updated and the pixel is drawn. Otherwise the pixel is discarded.
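Conceptually, the default GL_LESS test behaves like the following per fragment (a minimal illustrative sketch, not how the hardware actually implements it; all names here are made up):

    /* Per-fragment depth test with the default GL_LESS comparison.
       depth_buf is assumed to be cleared to 1.0, so the first fragment
       rasterized at any pixel always passes. */
    void shade_fragment(float depth_buf[], float color_buf[],
                        int pixel, float frag_z, float frag_color)
    {
        if (frag_z < depth_buf[pixel]) {   /* nearer than what is stored? */
            depth_buf[pixel] = frag_z;     /* remember the new nearest depth */
            color_buf[pixel] = frag_color; /* and let this fragment show */
        }                                  /* otherwise: discarded */
    }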

OTHER TIPS

After the model-view transformation all points are in "view coordinates", where the camera is at the origin looking down the -z axis. The points then pass through the projection transformation into "clip coordinates"; after the perspective division the z-values lie in the range [-1, +1]. Points on the near plane end up with a z-value of -1 and points on the far plane end up with a z-value of +1.
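You can check that mapping numerically with the z-row of the standard perspective matrix (the one glFrustum/gluPerspective builds); n and f below are just illustrative near/far distances:

    #include <stdio.h>

    /* z-mapping of the standard OpenGL perspective projection.
       n and f are the positive near/far clip distances; z_eye is
       negative for points in front of the camera. */
    double z_ndc(double z_eye, double n, double f)
    {
        double z_clip = -(f + n) / (f - n) * z_eye - 2.0 * f * n / (f - n);
        double w_clip = -z_eye;          /* the perspective-divide term */
        return z_clip / w_clip;
    }

    int main(void)
    {
        double n = 0.1, f = 100.0;
        printf("near plane: %+f\n", z_ndc(-n, n, f));           /* -1.0 */
        printf("far plane:  %+f\n", z_ndc(-f, n, f));           /* +1.0 */
        printf("midway:     %+f\n", z_ndc(-(n + f) / 2, n, f)); /* ~+0.998: the mapping is nonlinear */
        return 0;
    }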

This is true for all projections: the pipeline expects all coordinate values to be in the range [-1, +1] after clipping and perspective division, at which point the points are in normalized device coordinates. The viewport stage then remaps NDC z into the [0, 1] window-space range that the depth buffer actually stores and compares.
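That last remapping is the depth-range transform; with the glDepthRange(0.0, 1.0) defaults it reduces to a sketch like this:

    /* Maps NDC z in [-1, +1] to window z in [0, 1] using the
       glDepthRange defaults; this is the value the depth buffer holds,
       which is why it clears to 1.0 (the far plane). */
    double z_window(double z_ndc)
    {
        const double dr_near = 0.0, dr_far = 1.0;  /* glDepthRange defaults */
        return z_ndc * (dr_far - dr_near) / 2.0 + (dr_far + dr_near) / 2.0;
    }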

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow