Question

I'm debugging a problem with SSAO and trying to visualise my depth buffer. Here's the result: [screenshot of the depth buffer visualisation]. I'm storing the depth and normals in a single 16-bit RGBA texture. This is my depth pass shader:

// Vertex shader
#version 150 core
#extension GL_ARB_explicit_attrib_location : enable

uniform mat4 _ViewMatrix;
uniform mat4 _ViewProjectionMatrix;
uniform mat4 modelMatrix;

layout (location = 0) in vec4 aPosition;
layout (location = 2) in vec3 aNormal;

out vec4 vPosition;
out vec3 vNormal;

void main()
{
    gl_Position = _ViewProjectionMatrix * modelMatrix * aPosition;

    mat4 modelViewMatrix = _ViewMatrix * modelMatrix;

    vPosition = modelViewMatrix * aPosition;
    vNormal = mat3( modelViewMatrix ) * aNormal;
}

// Fragment shader.
#version 150 core

// Calculated as 1.0 / (far - near)
uniform float uLinearDepthConstant;

in vec4 vPosition;
in vec3 vNormal;

out vec4 outDepthNormal;

void main()
{
    float linearDepth = -vPosition.z * uLinearDepthConstant;

    outDepthNormal = vec4( linearDepth, normalize( vNormal ) );
}

Then I visualise the depth in a shader that renders the texture (I've hard-coded the near and far plane distances):

#version 150 core
uniform sampler2D depthNormalMap;
in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    float depth = texture( depthNormalMap, vTexCoord ).r;
    fragColor = vec4((2.0 * 1.0) / (200.0 + 1.0 - depth * (200.0 - 1.0)));
}

Should the result appear smooth, or what could be the problem? I'm creating the texture like this:

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_HALF_FLOAT, 0 );

Solution

Given that no hardware actually offers a 16-bit floating-point depth buffer, this looks about right.

Some consoles (e.g. the Xbox 360) expose a 24-bit floating-point depth buffer, but to do things portably you generally have to use a 32-bit depth buffer to get a floating-point representation. That in turn forces an ugly 64-bit packed depth+stencil format if you need both a stencil buffer and floating-point depth, which is part of the reason ATI hardware in D3D and on the Xbox 360 offers a packed 24-bit floating-point depth + 8-bit stencil format that is not available in OpenGL. If GL had it, it would be called GL_DEPTH24F_STENCIL8.
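For reference, the packed floating-point depth + stencil format OpenGL does offer is the 64-bit GL_DEPTH32F_STENCIL8. A minimal allocation sketch, assuming width and height as in your existing texture setup and a GL 3.0+ context:

GLuint depthStencilTex;
glGenTextures( 1, &depthStencilTex );
glBindTexture( GL_TEXTURE_2D, depthStencilTex );
// 32-bit float depth + 8-bit stencil, padded out to 64 bits per texel.
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0,
              GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, 0 );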

Now, while a 16-bit floating-point value is rather inadequate for storing your depth, it is at the same time probably a waste of space and memory bandwidth for storing your normals. Try a format like RGBA 10:10:10:2 (fixed-point) or RGB 11:11:10 (floating-point) for the normals (if RGBA8 is actually inadequate), and with the space you save you can afford a 32-bit floating-point depth buffer.
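If you go that route, the allocations might look something like this (a sketch only; width and height as in your current call, with binding and attachment code omitted):

// Normals: 10:10:10:2 fixed-point.
// (Or GL_R11F_G11F_B10F with GL_RGB / GL_UNSIGNED_INT_10F_11F_11F_REV
//  for the packed floating-point variant.)
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
              GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, 0 );

// Depth: a real 32-bit floating-point depth image.
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
              GL_DEPTH_COMPONENT, GL_FLOAT, 0 );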

As it stands, you are not actually using a floating-point depth buffer at all; you are packing your depth into one channel of a 16-bit per-component color buffer. I would suggest using an actual depth attachment with a 24-bit fixed-point or 32-bit depth image format if your current solution is not cutting it. You already have to write to a dedicated depth buffer during normal rendering anyway.
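A minimal sketch of what that could look like, assuming a framebuffer object named fbo for your geometry pass (that name is not from your code):

GLuint depthTex;
glGenTextures( 1, &depthTex );
glBindTexture( GL_TEXTURE_2D, depthTex );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0 );
// No mipmaps, so use non-mipmapped filtering before sampling it later.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

glBindFramebuffer( GL_FRAMEBUFFER, fbo );
// The depth test fills this during the normal pass; the SSAO pass can then
// sample depthTex directly instead of a packed color channel.
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0 );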

Right off the bat, by using a floating-point format to store your depth you lose one bit of precision, because it has to store a rather meaningless sign bit (fixed-point depth buffers do not). And since depth values generally already lie in the range 0-1, the extended range of a floating-point number does not buy you anything. At 16 bits you lose more than you gain by storing depth in a floating-point format; fixed-point depth is really the way to go when you use fewer bits to store it.
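To put rough numbers on that (assuming IEEE 754 half precision with 10 explicit mantissa bits): just below 1.0, adjacent half-float values are 2^-11 apart, roughly 0.00049, while a 16-bit fixed-point depth value has a uniform step of 1/65535, roughly 0.000015, which is about 32 times finer exactly where a linear depth value needs it most. A throwaway check of that arithmetic:

/* Quick comparison of 16-bit half-float vs. 16-bit fixed-point depth
   resolution near depth = 1.0 (assumes IEEE 754 half precision). */
#include <math.h>
#include <stdio.h>

int main( void )
{
    double half_step  = pow( 2.0, -11.0 ); /* spacing of half floats in [0.5, 1.0) */
    double unorm_step = 1.0 / 65535.0;     /* uniform spacing of 16-bit fixed point */

    printf( "half-float step near 1.0: %g\n", half_step );  /* ~0.000488 */
    printf( "16-bit fixed-point step:  %g\n", unorm_step ); /* ~0.0000153 */
    printf( "ratio:                    %g\n", half_step / unorm_step ); /* ~32 */
    return 0;
}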
