I'm getting no benefit from the Woo algorithm; it's just moving the problem to the depth-peeling part.
The Z-fighting problem exists in both depth peeling and shadow mapping, but the shadow-mapping case is much worse.
With shadow mapping you are in camera space, where you compute a z value and a sample location (x, y) in shadow-map space. The problem is that often there is no sample exactly at (x, y) in the shadow map, so you take the closest sample to (x, y) and then compare the shadow depth stored there to your z value.
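To make that concrete, here is a minimal CPU-side sketch of the lookup (ShadowMap and shadowTest are made-up names for illustration, not your code): the shaded point almost never projects exactly onto a texel, so it is the nearest texel's depth that gets compared against z, with a small bias to absorb the mismatch.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct ShadowMap {
    int width, height;
    std::vector<float> depth;            // one depth value per texel, in [0, 1]

    float sample(float u, float v) const {
        // No texel lies exactly at (u, v); take the nearest one.
        int x = std::clamp(int(std::lround(u * (width  - 1))), 0, width  - 1);
        int y = std::clamp(int(std::lround(v * (height - 1))), 0, height - 1);
        return depth[y * width + x];
    }
};

// Returns 1.0 if the point is lit, 0.0 if it is in shadow.
float shadowTest(const ShadowMap& sm, float u, float v, float zLight, float bias)
{
    float occluderZ = sm.sample(u, v);   // depth of the nearest sample, not of (u, v) itself
    return (zLight - bias <= occluderZ) ? 1.0f : 0.0f;
}
```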
Unlike shadow mapping, depth peeling compares z values at the very same sample location (the same shadow-map space). It is actually an excellent way to fight shadow artifacts if you can afford it, but it won't help in every situation. For example: if you have a big object on screen, say a house, the projection of the house onto the shadow map could lie within a few samples or even a single one (one pixel), and in that case all the z values computed for the house are compared against the same z value in the shadow map. Conversely, one pixel on screen could correspond to an enormous region of the shadow map.
Suggestions to improve your results:
Make sure DEPTH_STORED_IN_THE_DEPTH_MAP is computed exactly the same way as DISTANCE_OF_THE_POINT_TO_LIGHT. Pay particular attention to hidden fixed-pipeline arithmetic (e.g. the perspective divide and the [0, 1] depth-range remap): apply it to both DEPTH_STORED_IN_THE_DEPTH_MAP and DISTANCE_OF_THE_POINT_TO_LIGHT, or to neither.
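A minimal sketch of that rule, assuming GLM for the math (lightDepth and lightViewProj are hypothetical names): a single function produces the depth as seen from the light, and both the shadow pass and the shading pass call it, so the projection and the [0, 1] remap can never be applied to only one side.

```cpp
#include <glm/glm.hpp>   // assumed external dependency (GLM)

// One single definition of "depth as seen from the light".
float lightDepth(const glm::vec3& worldPos, const glm::mat4& lightViewProj)
{
    glm::vec4 p = lightViewProj * glm::vec4(worldPos, 1.0f);
    float ndcZ  = p.z / p.w;             // clip space -> NDC, in [-1, 1]
    return ndcZ * 0.5f + 0.5f;           // [0, 1], the same remap the fixed pipeline applies
}

// Shadow pass:  store lightDepth(...) in the shadow map.
// Shading pass: compare lightDepth(...) of the shaded point against the stored value.
// If you instead store distance(worldPos, lightPos), then compare that same
// distance, never a mix of the two conventions.
```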
Use 32-bit floating-point textures to store the depth.
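For example, assuming an OpenGL 3.0+ context and a bound shadow FBO (shadowWidth and shadowHeight are placeholders), the shadow map can be allocated as a 32-bit float depth texture:

```cpp
// Allocate the shadow map as GL_DEPTH_COMPONENT32F instead of the default 16/24-bit format.
GLuint shadowTex;
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F,
             shadowWidth, shadowHeight, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Attach to the (already bound) shadow framebuffer.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadowTex, 0);
```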
Try percentage-closer filtering (PCF).
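Continuing the CPU-side sketch from above, PCF simply averages the binary shadow test over a small neighborhood of shadow-map texels instead of relying on a single one, which softens the aliased shadow edge:

```cpp
// PCF on top of the ShadowMap / shadowTest sketch above (illustrative names).
float shadowPCF(const ShadowMap& sm, float u, float v, float zLight,
                float bias, int radius = 1)
{
    float lit = 0.0f;
    int taps = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            float du = dx / float(sm.width);     // offset of one texel in u
            float dv = dy / float(sm.height);    // offset of one texel in v
            lit += shadowTest(sm, u + du, v + dv, zLight, bias);
            ++taps;
        }
    }
    return lit / float(taps);                    // fraction of the neighborhood that is lit
}
```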
Depth peeling gives you an adaptive bias. You can also have adaptive resolution: Cascaded Shadow Maps.