Question

Position reconstruction

I want to verify that this is a valid method and I'm not overlooking something.

I am using a spherical mesh to render only the portion of the screen that the light volume overlaps. I render only its back-faces, and only where their depth is greater than or equal to the value in the depth buffer, as suggested here.
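For reference, a minimal sketch of the render state I would expect for that back-face / GEQUAL pass (the exact calls here are my assumption, not necessarily the asker's code):

// Draw only the back-faces of the light volume, passing where the
// volume lies at or behind the geometry already in the depth buffer.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);      // cull front faces so only back-faces are rasterized
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GEQUAL);    // pass if volume depth >= stored depth
glDepthMask(GL_FALSE);     // no depth writes during the lighting pass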

To reconstruct the camera-space position of a fragment, I take the vector from the camera to the camera-space position of the fragment on the light volume, normalize it, and scale it by the linear depth from my g-buffer (which is stored as a 32-bit float). This is sort of a hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).

(image: position_reconstruction)
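As I understand the description above, the fragment-shader side looks roughly like this (names are illustrative, and this assumes the 32-bit channel stores the camera-space distance to the surface rather than the view-space Z):

// volume_pos_cam: camera-space position of the light-volume fragment,
// passed in from the vertex shader. In camera space the eye is at the
// origin, so this is also the direction of the view ray through the fragment.
vec3 view_ray = normalize(volume_pos_cam);

// g_dist: linear distance to the shaded surface, read from the g-buffer.
float g_dist = texture(gbuffer_dist_tex, tex_coord).r;

// Reconstructed camera-space position of the surface being lit.
vec3 surface_pos_cam = view_ray * g_dist;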


Banding

The reason I ask is that the light attenuation results I get from my deferred renderer differ from those of my forward renderer.

Deferred (image: deferred)

Forward (image: forward)

Attenuation depends on the reconstructed camera-space position, since I calculate it as follows:

// Vector from the reconstructed surface position to the light (camera space).
vec3 light_dir_to = curr_light.camera_space_position - surface_pos_cam;
// Squared distance avoids a sqrt.
float light_dist_sq = dot(light_dir_to, light_dir_to);

// Attenuation: 1 at the light's position, falling to 0 at the light's radius.
float light_attenuation_factor = 1.0f - ((1.0f / (curr_light.radius * curr_light.radius)) * light_dist_sq);
light_attenuation_factor = clamp(light_attenuation_factor, 0.0f, 1.0f);
// "falloff" shapes how quickly the attenuation fades out.
light_attenuation_factor = pow(light_attenuation_factor, curr_light.falloff);

The difference isn't very noticeable in these instances, but the instant I try to scale the light (e.g. raise the attenuation to a power to make it fade out faster), the effects become immediately apparent.

light_attenuation_factor = pow(light_attenuation_factor, 2.0f);

(image: attenpow2)

My problem may lie elsewhere, but I want to verify that my position reconstruction method isn't flawed in some way I'm overlooking.


EDIT

Posting my gbuffer setup as requested.

// Render targets: 32-bit float distance, diffuse colour, 10-bit normals,
// specular intensity/power, and the light accumulation buffer.
enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz10, e_spec_intens_b8_spec_pow_a8, e_light_rgb8, num_rt };
//...
GLint internal_formats[num_rt] = {  GL_R32F, GL_RGBA8, GL_RGB10_A2, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt]          = {   GL_RED,  GL_RGBA,     GL_RGBA,  GL_RGBA,  GL_RGBA };
GLint types[num_rt]            = { GL_FLOAT, GL_FLOAT,    GL_FLOAT, GL_FLOAT, GL_FLOAT };
for(uint i = 0; i < num_rt; ++i)
{
  glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
  glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);

NOTE: This issue occurs on planar surfaces that have one normal for the whole surface, so this cannot be a loss of precision in the normals.


FINAL EDIT - SOLUTION

It appears as though this method is in fact valid (as mentioned by GuyRT). The banding issue turned out to come from how I am doing gamma correction.

For my forward renderer I have just one loop over 8 lights (a single pass, no multiple passes), and I apply gamma correction right after the lighting calculations.
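Schematically (a sketch with made-up names, not my exact shader), the forward path does:

vec3 color = vec3(0.0);
for(int i = 0; i < 8; ++i)
{
    color += shade(lights[i], surface);   // lighting in linear space
}
// Gamma correction happens immediately, so the 8-bit back buffer receives
// a perceptually encoded value; there is no intermediate 8-bit linear storage.
frag_color = vec4(pow(color, vec3(1.0 / 2.2)), 1.0);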

For my deferred renderer I do all lighting calculations, post-processing, etc., and only convert to gamma space at the end. The issue here is that I:

  1. Do my lighting calculations in linear RGB space
  2. Store the result in a texture that is still in linear RGB space (with only 8 bits of precision)
  3. When lighting is done, gamma-correct the value and copy it to the back buffer.

For example, let's say the lighting calculations for two fragments should end up at the final values 1/255 (~0.004) and 2/255 (~0.008) in sRGB space (as presented at the end). In linear RGB space these values are (1/255)^2.2 ≈ 0.000006 and (2/255)^2.2 ≈ 0.00002. When these values are stored to my 8-bit lighting accumulation texture, they are both quantized to the same value, 0. This is the cause of the banding.
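A quick back-of-the-envelope check of that quantization (standalone C++, just to illustrate the arithmetic):

#include <cmath>
#include <cstdio>

int main()
{
    // Two adjacent 8-bit sRGB levels, converted to linear with a simple 2.2 gamma.
    float srgb_a = 1.0f / 255.0f, srgb_b = 2.0f / 255.0f;
    float lin_a = std::pow(srgb_a, 2.2f);   // ~0.000006
    float lin_b = std::pow(srgb_b, 2.2f);   // ~0.00002

    // Storing the linear values in an 8-bit UNORM channel rounds both to 0,
    // so the two fragments become indistinguishable -> banding.
    std::printf("%d %d\n", int(std::round(lin_a * 255.0f)),
                           int(std::round(lin_b * 255.0f)));
    return 0;
}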

Converting my lighting accumulation texture to GL_R11F_G11F_B10F has yielded results that are very close to my forward renderer. The answers to these two questions helped me once I found that gamma was the issue: sRGB textures. Is this correct? and When to call glEnable(GL_FRAMEBUFFER_SRGB)?.
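In terms of the setup code above, the change amounts to allocating the light accumulation target with a higher-precision internal format (a sketch; the format/type parameters shown are my guess at what matches the rest of the setup):

// Light accumulation target: packed floating-point instead of 8-bit UNORM,
// so small linear-space values are no longer quantized to 0.
glBindTexture(GL_TEXTURE_2D, _render_targets[e_light_rgb8]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F, _width, _height, 0,
             GL_RGB, GL_FLOAT, nullptr);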

The final result with a "falloff" of 4.0

(image: final_result)


EXTRA RESOURCE

I just found out this effect is called "Gamma Banding", which makes sense. This website has some useful charts and this video has a nice numerical walkthrough.


Solution

With a bit of tweaking, I think your method is valid and feasible.

This looks very much like the same artefact discussed here. It is caused by a loss of precision in your g-buffer normals. The solution in that case was to use the GL_RGB10_A2 format to store normals.
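If normals are stored in a UNORM format like GL_RGB10_A2, the usual remap between [-1, 1] and [0, 1] looks something like this (a generic GLSL sketch, not taken from the question's code):

// Writing to the g-buffer: remap the unit normal from [-1, 1] to [0, 1].
vec3 encoded_normal = normal * 0.5 + 0.5;

// Reading from the g-buffer: remap back and renormalize to undo quantization.
vec3 decoded_normal = normalize(texture(gbuffer_normal_tex, tex_coord).xyz * 2.0 - 1.0);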

If you're interested, there is quite a thorough discussion of alternative representations for g-buffer normals here: http://aras-p.info/texts/CompactNormalStorage.html, although it is a bit old, so ALU/bandwidth trade-offs might be different today. Also, I think he makes a (quite common) mistake in his discussion of view-space normals, the z-component of which can be negative.
