Question

I'm trying to make a self-shadowed object (OpenGL/GLSL). The steps I followed were these:

1- Render the scene from the light's point of view to obtain a depth map.

2- Render from the camera position and calculate the distance of each point to the light source; if the distance of the point to the light is greater than the depth stored in the depth map, the point is in shadow.

But then the problem of shadow acne appears, so the second step becomes:

2- ...if( abs(DISTANCE_OF_THE_POINT_TO_LIGHT - DEPTH_STORED_IN_THE_DEPTH_MAP) > BIAS ) the point is in shadow.

But it still doesn't give me good results (there is no suitable bias value).
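In sketch form (names like shadowMap and lightSpacePos are placeholders, and the bias value is only illustrative), the camera-pass test looks like this:

    // Fragment shader, camera pass (sketch; names are placeholders).
    #version 330 core

    uniform sampler2D shadowMap;   // depth map rendered from the light
    uniform float bias;            // e.g. 0.005, tuned by hand

    in vec4 lightSpacePos;         // vertex position * light view-projection
    out vec4 fragColor;

    void main()
    {
        // Perspective divide and remap from [-1, 1] to [0, 1].
        vec3 p = lightSpacePos.xyz / lightSpacePos.w;
        p = p * 0.5 + 0.5;

        float storedDepth  = texture(shadowMap, p.xy).r; // DEPTH_STORED_IN_THE_DEPTH_MAP
        float currentDepth = p.z;                        // DISTANCE_OF_THE_POINT_TO_LIGHT

        // Biased comparison: in shadow only if clearly behind the stored surface.
        float shadow = (currentDepth - bias > storedDepth) ? 0.0 : 1.0;

        fragColor = vec4(vec3(shadow), 1.0);
    }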

So I implemented Woo's trick (find the first and second surfaces seen from the light, then store their midpoint in the depth buffer). To do so, I did a two-pass depth peeling. The steps are:

1- Render the scene from the light to obtain a 1st depth map, as usual.

2- Render the scene from the light again to obtain a 2nd depth map; if a point's distance to the light equals the distance in the 1st depth map, discard that point (that is, don't render the first layer).

3- Store (1st depth map + 2nd depth map)/2 as the final depth map (a shader sketch of steps 2 and 3 follows this list).

4- Render from the camera position and calculate the distance of each point to the light source; if the distance of the point to the light is greater than the depth stored in the depth map, the point is in shadow.
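A sketch of steps 2 and 3 fused into a single light-space fragment shader, rendered with depth testing (GL_LESS) into an R32F color attachment so that the nearest surviving fragment wins; the names and the epsilon tolerance are placeholders:

    // Fragment shader, second light pass (sketch; names are placeholders).
    #version 330 core

    uniform sampler2D firstDepth;  // depth map from pass 1
    uniform float epsilon;         // peel tolerance: this is where the acne lives

    out float midDepth;            // written to the final midpoint map

    void main()
    {
        float d1 = texelFetch(firstDepth, ivec2(gl_FragCoord.xy), 0).r;

        // Discard the first layer: any fragment at (or within epsilon of)
        // the depth already stored in the first map.
        if (gl_FragCoord.z <= d1 + epsilon)
            discard;

        // Step 3: the nearest surviving fragment is the second layer;
        // store the midpoint of the two surfaces.
        midDepth = 0.5 * (d1 + gl_FragCoord.z);
    }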

The problem now appears in step 2, and it is the SAME problem as before: when the first and second layers are close there are rounding errors, a sort of acne appears again, and there is no suitable bias value either. So I'm getting no benefit from Woo's algorithm; it's just moving the problem to the depth-peeling part.

How can this be solved?


Solution

"I'm getting no benefit from Woo's algorithm; it's just moving the problem to the depth-peeling part."

The Z-fighting problem exists in both depth peeling and shadow mapping, but the shadow-mapping case is much worse.

With shadow mapping you are in camera space: you compute a z value and a sample location (x, y) in shadow-map space. The problem is that often there is no sample exactly at (x, y) in the shadow map, so you take the sample closest to (x, y) and then compare its stored depth to your z value.

Unlike shadow mapping, depth peeling compares z values at the very same sample location (in the same shadow-map space). It is actually an excellent way to fight shadow artifacts if you can afford it, but it won't help in all situations. For example:

If you have a big object on screen, say a house, its projection onto the shadow map could lie within a few samples or even a single one (one pixel); in that case, all z values computed for the house are compared against the same stored z value. Conversely, one pixel on screen could correspond to an enormous region of the shadow map.

Suggestions to improve your results:

Make sure DEPTH_STORED_IN_THE_DEPTH_MAP is computed exactly the same way as DISTANCE_OF_THE_POINT_TO_LIGHT. Pay particular attention to hidden fixed-pipeline arithmetic, such as the perspective divide and the glDepthRange mapping: apply these transforms to both values or to neither.
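For example, one way to guarantee this, sketched here under the assumption that you control both passes, is to bypass the fixed-pipeline depth entirely and store a metric you compute yourself, such as the linear distance to the light, written and compared with the same expression:

    // Light pass fragment shader (sketch; names are assumptions).
    #version 330 core
    uniform vec3 lightPos;     // light position in world space
    in vec3 worldPos;          // interpolated world-space position
    out float storedDepth;
    void main()
    {
        storedDepth = distance(worldPos, lightPos);
    }

    // Camera pass (fragment of a shader): the exact same expression.
    // float currentDepth = distance(worldPos, lightPos);
    // float storedDepth  = texture(shadowMap, shadowUV).r;
    // float shadow = (currentDepth - bias > storedDepth) ? 0.0 : 1.0;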

Use 32-bit floating-point textures to store the depth.

Try percentage-closer filtering (PCF).
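A minimal 3x3 PCF sketch (the sampler, the light-space position p, and the bias are assumed to come from the setup above):

    // 3x3 percentage-closer filtering (sketch; names are assumptions).
    float pcfShadow(sampler2D shadowMap, vec3 p, float bias)
    {
        vec2 texel = 1.0 / vec2(textureSize(shadowMap, 0));
        float lit = 0.0;
        for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
        {
            float stored = texture(shadowMap, p.xy + vec2(x, y) * texel).r;
            lit += (p.z - bias > stored) ? 0.0 : 1.0;
        }
        return lit / 9.0;   // fraction of samples that are lit
    }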

Depth peeling gives you an adaptive BIAS. You can also use an adaptive resolution: cascaded shadow maps.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow