Question

I've been working on SSAO in OpenGL. I decided to implement SSAO from this tutorial for my deferred renderer. Unfortunately I've been unable to get it working well. The areas darkened by SSAO change greatly depending on the camera's position. I understand there may be some variation in SSAO output as the camera moves, but what I see is far more pronounced than in other SSAO implementations I've observed.

Here is the fragment shader code:

void main() {

    vec3 origin = positionFromDepth(texture2D(gDepth, samplePosition).r);
    vec3 normal = texture2D(gNormal, samplePosition).xyz; // multiplying this by 2 and subtracting 1 doesn't seem to help
    vec2 random = getRandom(samplePosition);

    float radius = uRadius/origin.z;
    float occlusion = 0.0;
    int iterations = samples/4;

    for (int i = 0; i<iterations; i++) {
        vec2 coord1 = reflect(kernel[i], random)*radius;
        vec2 coord2 = vec2(coord1.x*0.707 - coord1.y*0.707, coord1.x*0.707 + coord1.y*0.707);
        occlusion += occlude(samplePosition, coord1 * 0.25, origin, normal);
        occlusion += occlude(samplePosition, coord2 * 0.50, origin, normal);
        occlusion += occlude(samplePosition, coord1 * 0.75, origin, normal);
        occlusion += occlude(samplePosition, coord2, origin, normal);
    }

    color = vec4(origin, 1); // note: writes the reconstructed position (debug output), not the occlusion term

}

The positionFromDepth() function:

vec3 positionFromDepth(float depth) {
    float near = frustrumData.x;
    float far = frustrumData.y;
    float right = frustrumData.z;
    float top = frustrumData.w;
    vec2 ndc;           
    vec3 eye;             
    eye.z = near * far / ((depth * (far - near)) - far);
    ndc.x = ((gl_FragCoord.x/buffersize.x) - 0.5) * 2.0; 
    ndc.y = ((gl_FragCoord.y/buffersize.y) - 0.5) * 2.0;
    eye.x = (-ndc.x * eye.z) * right/near;
    eye.y = (-ndc.y * eye.z) * top/near;
    return eye;
}
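(For what it's worth, the depth linearization checks out: depth = 0 gives eye.z = near*far/(0 - far) = -near, and depth = 1 gives eye.z = near*far/((far - near) - far) = -far, so eye.z runs from -near to -far, i.e. negative eye space z as OpenGL expects.)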

And the occlude() function:

float occlude(vec2 uv, vec2 offsetUV, vec3 origin, vec3 normal) {
    vec3 diff = positionFromDepth(texture2D(gDepth, uv + offsetUV).r) - origin;
    vec3 vec = normalize(diff);
    float dist = length(diff)/scale;
    return max(0.0,dot(normal,vec)-bias)*(1.0/(1.0+dist))*intensity;
}

I have a feeling the problem could be in the positionFromDepth() function, except that I use the same code in the lighting stage of the renderer, where it works perfectly (I think). I've been over this code a thousand times and haven't found anything that stands out as wrong. I've tried a variety of values for bias, radius, intensity, and scale, but that doesn't seem to be the problem. I am worried that either my normals or my positions are wrong, so here are some screenshots of them:

The reconstructed position: [screenshot] And the normal buffer: [screenshot]

I would include an image of the occlusion buffer, but the problem is really only apparent while the camera is moving, which a still image can't show.

Does anyone have any idea what's wrong here?


Solution

It is strange that multiplying by 2 and subtracting 1 does not help with your normal map. That remapping exists to overcome the issues associated with storing normals in unsigned/normalized texture formats. Unless your normal G-Buffer uses a signed/unnormalized format, you need to pack your normals with * 0.5 + 0.5 when you first write the texture and unpack them with * 2.0 - 1.0 when you sample it.
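For reference, here is a minimal sketch of that round trip (vNormal and gNormalOut are placeholder names; this assumes an unsigned-normalized format such as RGBA8 for the normal target):

// G-Buffer pass: remap [-1, 1] -> [0, 1] so the normal survives unsigned-normalized storage
in  vec3 vNormal;
out vec4 gNormalOut;

void main (void)
{
  gNormalOut = vec4 (normalize (vNormal) * 0.5 + 0.5, 1.0);
}

// SSAO pass: remap back to [-1, 1] after sampling
vec3 normal = normalize (texture2D (gNormal, samplePosition).xyz * 2.0 - 1.0);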

In any case, there are multiple approaches to SSAO, and many do not use surface normals at all; as a result, the question of which vector space the normals are stored in is often overlooked.

I strongly suspect that your normals are in view space, rather than world space. If you multiplied your normal by the "normal matrix" in your vertex shader, like many tutorials will have you do, then your normals will be in view space.

It turns out that view space normals are not all that useful, given the number of post-processing effects these days that work better with world space normals. Most modern deferred shading engines (e.g. Unreal Engine 4, CryEngine 3) store the normal G-Buffer in world space and then transform normals into view space (if needed) in the pixel shader.
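If you do store world space normals, bringing one into view space for SSAO is a one-liner in the fragment shader. A sketch, assuming a uniform named view_mat for the camera's view matrix (mat3() is sufficient here as long as the view matrix contains no non-uniform scale):

uniform mat4 view_mat; // assumed name for the camera's view matrix

vec3 n_ws = texture2D (gNormal, samplePosition).xyz * 2.0 - 1.0; // world space
vec3 n_vs = normalize (mat3 (view_mat) * n_ws);                  // view space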


By the way, I have included some code that I use to reconstruct the object space position from a traditional depth buffer. You appear to be using view space positions/normals; you might want to try doing everything in object/world space.

flat in mat4 inv_mv_mat;
     in vec2 uv;

...

// Converts a depth buffer sample in [0, 1] to positive, linear eye-space depth
float linearZ (float z)
{
#ifdef INVERT_NEAR_FAR
  const float f = 2.5;
  const float n = 25000.0;
#else
  const float f = 25000.0;
  const float n = 2.5;
#endif

  return n / (f - z * (f - n)) * f;
}

// Reconstructs the object space position of the current fragment from depth
vec4
reconstruct_pos (in float depth)
{
  depth = linearZ (depth);

  vec4 pos = vec4 (uv * depth, -depth, 1.0); 
  vec4 ret = (inv_mv_mat * pos);

  return ret / ret.w;
}
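For completeness, a hypothetical call site in the lighting pass fragment shader would look something like this (gDepth is an assumed sampler name, borrowed from the question's G-Buffer):

in vec2 tex_st; // from the vertex shader below

uniform sampler2D gDepth;

void main (void)
{
  float depth = texture2D (gDepth, tex_st).r;
  vec4  pos   = reconstruct_pos (depth); // object/world space position
  // ... SSAO / lighting using pos ...
}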

It takes a little additional setup in the vertex shader stage of the deferred shading lighting pass, which looks like this:

#version 150 core

in       vec4 vtx_pos;
in       vec2 vtx_st;

uniform  mat4 modelview_mat; // Matrix used when the G-Buffer was built
uniform  mat4 camera_matrix; // Matrix used to stretch the G-Buffer over the viewport

uniform float buffer_res_x;
uniform float buffer_res_y;

     out vec2 tex_st;
flat out mat4 inv_mv_mat;
     out vec2 uv;


// Hard-Coded 45 degree FOV
//const float fovy = 0.78539818525314331; // NV pukes on the line below!
//const float fovy = radians (45.0);
//const float tan_half_fovy = tan (fovy * 0.5);

const float   tan_half_fovy = 0.41421356797218323;

      float   aspect        = buffer_res_x / buffer_res_y;
      vec2    inv_focal_len = vec2 (tan_half_fovy * aspect,
                                    tan_half_fovy);

const vec2    uv_scale     = vec2 (2.0, 2.0);
const vec2    uv_translate = vec2 (1.0, 1.0);


void main (void)
{
  inv_mv_mat  = inverse (modelview_mat);
  tex_st      = vtx_st;
  gl_Position = camera_matrix * vtx_pos;
  uv          = (vtx_st * uv_scale - uv_translate) * inv_focal_len;
}
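Note what uv is doing here: vtx_st is remapped from [0, 1] to [-1, 1] and scaled by the inverse focal length, so that in reconstruct_pos() the product uv * depth yields eye space x/y directly (with z = -depth), and inv_mv_mat then takes the reconstructed point out of eye space.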
Licensed under: CC-BY-SA with attribution