Question

As I understand it, shadow mapping is done by rendering the scene from the perspective of the light to create a depth map. Then you re-render the scene from the camera's point of view, and for each point (fragment, in GLSL terms) you calculate the distance from that point to the light source; if it matches what's stored in the shadow map, the point is lit, otherwise it's in shadow.
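
In GLSL terms, I picture the camera-pass comparison looking roughly like this (just a sketch; uShadowMap and vLightSpacePos are placeholder names I made up):

    #version 330 core

    uniform sampler2D uShadowMap;   // depth map rendered from the light's POV
    in vec4 vLightSpacePos;         // fragment position in light clip space
    out vec4 fragColor;

    void main()
    {
        // Perspective divide, then remap from [-1, 1] to [0, 1] for texture UVs.
        vec3 proj = vLightSpacePos.xyz / vLightSpacePos.w;
        proj = proj * 0.5 + 0.5;

        float closestDepth = texture(uShadowMap, proj.xy).r; // nearest occluder
        float currentDepth = proj.z;                         // this fragment

        // If our depth matches the stored depth (in practice <=), we're lit.
        float lit = currentDepth <= closestDepth ? 1.0 : 0.0;

        fragColor = vec4(vec3(lit), 1.0);
    }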

I was just reading through this tutorial to get an idea of how to do shadow mapping with a point/omnidirectional light.

Under section 12.2.2 it says:

We use a single shadow map for all light sources

And then under 12.3.6 it says:

1) Calculate the squared distance from the current pixel to the light source.
...
4) Compare the calculated distance value with the fetched shadow map value to determine whether or not we're in shadow.

Which is roughly what I stated above.
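
If I follow steps 1 and 4, I imagine the point-light version looks something like this, with the squared distance stored in a cube map (again, the names are just placeholders I made up):

    #version 330 core

    uniform samplerCube uShadowCube; // squared distance to the nearest occluder
    uniform vec3 uLightPos;          // light position in world space
    in vec3 vWorldPos;               // fragment position in world space
    out vec4 fragColor;

    void main()
    {
        vec3 toFrag = vWorldPos - uLightPos;

        // Step 1: squared distance from the current pixel to the light source.
        float distSq = dot(toFrag, toFrag);

        // Fetch the stored value, using the direction to this fragment as the
        // cube map lookup vector.
        float storedSq = texture(uShadowCube, toFrag).r;

        // Step 4: compare the two values to decide whether we're in shadow.
        float lit = distSq <= storedSq ? 1.0 : 0.0;

        fragColor = vec4(vec3(lit), 1.0);
    }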

What I don't get is if we've baked all our lights into one shadow map, then which light do we need to compare the distance to? The distance baked into the map shouldn't correspond to anything, because it's a blend of all the lights, isn't it?

I'm sure I'm missing something, but hopefully someone can explain this to me.


Also, if we are using a single shadow map, how do we blend it for all the light sources?

For a single light source the shadow map just stores the distance of the closest object to the light (i.e., a depth map), but for multiple light sources, what would it contain?


Solution

You've cut the quoted sentence short:

We use a single shadow map for all light sources, creating an image with multipass rendering and performing one pass for each light source.

So the shadow map only ever contains the data for a single light source; they can get away with one map because they render only one light at a time.
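
As a sketch of what each per-light shadow pass might write, assuming the map stores the squared distance that the quoted steps compare against (the names here are mine):

    #version 330 core

    // Shadow pass for ONE light, rendered into the shared shadow map from the
    // light's point of view. The same texture is cleared and reused for the
    // next light's pass.
    uniform vec3 uLightPos; // position of the light being processed this pass
    in vec3 vWorldPos;      // fragment position in world space
    out vec4 fragColor;

    void main()
    {
        vec3 toFrag = vWorldPos - uLightPos;
        // Store the squared distance to the light (ideally in a float texture).
        fragColor = vec4(dot(toFrag, toFrag));
    }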

I think this flows into your second question: light is additive, so you combine the results from multiple lights simply by adding them together. In GPU Gems' case, they add the passes together directly in the frame buffer, no doubt because of the relatively limited storage and texture samplers available on GPUs at the time. Nowadays you would probably accumulate several lights directly in the fragment shader and use the frame buffer to combine whatever doesn't fit in one pass.
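
A minimal sketch of the frame-buffer approach (my own naming, not the tutorial's code; the host draws the scene once per light with additive blending, e.g. glBlendFunc(GL_ONE, GL_ONE)):

    #version 330 core

    // Lighting pass for ONE light. The host renders this once per light with
    // additive blending enabled (glEnable(GL_BLEND) and
    // glBlendFunc(GL_ONE, GL_ONE)), so each pass sums into the frame buffer.
    uniform samplerCube uShadowCube; // this light's shadow data (see above)
    uniform vec3 uLightPos;
    uniform vec3 uLightColor;
    in vec3 vWorldPos;
    in vec3 vNormal;
    out vec4 fragColor;

    void main()
    {
        vec3 toFrag = vWorldPos - uLightPos;
        float storedSq = texture(uShadowCube, toFrag).r;
        float lit = dot(toFrag, toFrag) <= storedSq ? 1.0 : 0.0;

        // Simple diffuse term; the direction to the light is -toFrag.
        float diffuse = max(dot(normalize(vNormal), normalize(-toFrag)), 0.0);
        fragColor = vec4(uLightColor * diffuse * lit, 1.0);
    }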

You also generally apply the test "the pixel is lit if its distance is less than or equal to the distance in the shadow buffer, plus a small bias" rather than testing for exact equality, because floating-point rounding errors accumulate.
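
In the sketches above, that changes the comparison to something like this (the bias constant is an arbitrary placeholder; real values are tuned per scene):

    // Small tolerance added to the stored value; 0.001 is a made-up number,
    // and for squared distances the right magnitude depends on scene scale.
    const float bias = 0.001;
    float lit = distSq <= storedSq + bias ? 1.0 : 0.0;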

Licensed under: CC-BY-SA with attribution