Question

I had an idea for fog that I would like to implement in OpenGL: After the scene is rendered, a quad is rendered over the entire viewport. In the fragment shader, this quad samples the depth buffer at that location and changes its color/alpha in order to make that pixel as foggy as it needs to be.

Now I know I can render the scene with the depth buffer linked to a texture, render the scene normally and then render the fog, passing it that texture, but this is one rendering too many. I wish to be able to either

  • Directly access the current depth buffer from the fragment shader
  • Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.

Is this possible?


Solution

What you're thinking of (accessing the target framebuffer for input) would result in a feedback loop, which is forbidden.

(…), but this is one rendering too many.

Why do you think that? You don't have to render the whole scene anew, just the fog overlay on top of it.

I wish to be able to either

  • Directly access the current depth buffer from the fragment shader

If you want to access only the depth of the newly rendered fragment, just use gl_FragCoord.z. This variable (which should only be read, to keep performance) holds the depth-buffer value the new fragment will have.

See the GLSL Specification:

The variable gl_FragCoord is available as an input variable from within fragment shaders and it holds the window relative coordinates (x, y, z, 1/w) values for the fragment. If multi-sampling, this value can be for any location within the pixel, or one of the fragment samples. The use of centroid in does not further restrict this value to be inside the current primitive. This value is the result of the fixed functionality that interpolates primitives after vertex processing to generate fragments. The z component is the depth value that would be used for the fragment’s depth if no shader contained any writes to gl_FragDepth. This is useful for invariance if a shader conditionally computes gl_FragDepth but otherwise wants the fixed functionality fragment depth.
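
For completeness: gl_FragCoord.z is the non-linear, post-projection depth value. If what you actually want is the eye-space distance (e.g. for distance-based fog), you can linearize it using your projection's near and far planes. A minimal sketch, assuming a standard perspective projection, the default glDepthRange, and hypothetical uNear/uFar uniforms (not part of the original answer):

uniform float uNear;   // near clip plane distance (assumed uniform)
uniform float uFar;    // far clip plane distance (assumed uniform)

// Convert gl_FragCoord.z (window-space, non-linear, in [0, 1])
// back to a linear eye-space distance.
float linearEyeDepth(float fragZ)
{
    float ndcZ = fragZ * 2.0 - 1.0;                   // window space -> NDC [-1, 1]
    return (2.0 * uNear * uFar) /
           (uFar + uNear - ndcZ * (uFar - uNear));
}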

  • Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.

What's so wrong with first rendering the scene normally, with depth going into a separate depth texture attachment, then rendering the fog, and finally compositing them? The computational complexity does not increase because of this. Just because there are more steps doesn't mean more work is done than in your imagined solution, since the individual steps become simpler.
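
To make that concrete, here is a minimal sketch of the fog/compositing pass: render the scene once into an FBO with a color texture and a depth texture attached, then draw a fullscreen quad with a fragment shader along these lines. All names here (uSceneTex, uDepthTex, uNear, uFar, uFogColor, uFogDensity) are placeholder uniforms for the sketch, not anything prescribed by OpenGL:

uniform sampler2D uSceneTex;    // color attachment from the scene pass
uniform sampler2D uDepthTex;    // depth attachment from the scene pass
uniform float     uNear;        // projection near plane
uniform float     uFar;         // projection far plane
uniform vec4      uFogColor;
uniform float     uFogDensity;

void main()
{
    vec2  uv    = gl_TexCoord[0].st;
    float depth = texture2D(uDepthTex, uv).r;                // window-space depth [0, 1]
    float ndcZ  = depth * 2.0 - 1.0;
    float dist  = (2.0 * uNear * uFar) /
                  (uFar + uNear - ndcZ * (uFar - uNear));    // linear eye-space distance

    float fogFactor = clamp(exp2(-uFogDensity * uFogDensity *
                                  dist * dist * 1.442695), 0.0, 1.0);

    gl_FragColor = mix(uFogColor, texture2D(uSceneTex, uv), fogFactor);
}

The scene pass and the fog pass each touch every pixel once, so the total work is comparable to computing the fog directly in the scene's own shader.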

OTHER TIPS

Distance from the camera to the pixel:

float z = gl_FragCoord.z / gl_FragCoord.w;

The solution you're thinking of is a common one, but there is no need for a supplementary sampling pass with a quad; everything you need to compute fog in a single pass is already there, as long as the depth buffer is enabled:

Here is an implementation:

const float LOG2 = 1.442695;                  // 1 / ln(2): lets us use exp2 instead of exp
float z = gl_FragCoord.z / gl_FragCoord.w;    // approximate eye-space distance to the fragment
float fogFactor = exp2(-gl_Fog.density *
                        gl_Fog.density *
                        z * z * LOG2);        // GL_EXP2-style exponential fog
fogFactor = clamp(fogFactor, 0.0, 1.0);

gl_FragColor = mix(gl_Fog.color, finalColor, fogFactor);
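
Note that gl_Fog and gl_FragColor are legacy built-ins that only exist in compatibility-profile GLSL. In a core profile you would pass the fog parameters as your own uniforms; a rough equivalent of the snippet above (uniform and variable names here are placeholders, not anything from the original answer):

#version 330 core

uniform float uFogDensity;     // replaces gl_Fog.density
uniform vec4  uFogColor;       // replaces gl_Fog.color
in  vec4 vColor;               // lit/textured color computed earlier
out vec4 fragColor;

void main()
{
    const float LOG2 = 1.442695;
    float z = gl_FragCoord.z / gl_FragCoord.w;   // approximate eye-space distance
    float fogFactor = clamp(exp2(-uFogDensity * uFogDensity * z * z * LOG2), 0.0, 1.0);
    fragColor = mix(uFogColor, vColor, fogFactor);
}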
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow