Question

I am confused about the bias matrix in shadow mapping. According to this question: bias matrix in shadow mapping, the bias matrix is used to scale down and translate the coordinates to [0..1] in x and [0..1] in y. So I imagine that if we don't use the bias matrix, the scene would fill only a quarter of the texture? Is that true, or is there some magic here?

Solution

Not entirely, but the result is the same, as the answer to the question you linked says. After the w divide your coordinates are in NDC space, ergo in the range [-1, 1] (for x, y and z). When you sample from a texture, the coordinates you give are in 'texture space', which OpenGL defines to be in the range [0, 1] (at least for 2D textures), with x=0, y=0 being the bottom left of the texture and x=1, y=1 the top right.
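As a minimal illustration of that first step (the types and function name here are mine, purely for illustration), the w divide is:

/* Perspective (w) divide: takes a clip-space position into NDC.
 * Anything inside the view frustum ends up with x, y and z in [-1, 1]. */
typedef struct { GLdouble x, y, z, w; } Vec4;
typedef struct { GLdouble x, y, z; } Vec3;

Vec3 clip_to_ndc(Vec4 clip)
{
    Vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}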

This means that when you sample from your rendered depth texture, you have to transform your calculated texture coordinates from [-1, 1] to [0, 1]. If you don't, the texture itself will be fine, but only a quarter of your coordinates will fall in the range you actually want to sample from.
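For a single coordinate that transform is just a scale and an offset; a minimal sketch (the function name is mine):

/* Maps one NDC coordinate from [-1, 1] into texture space [0, 1]:
 * -1 -> 0, 0 -> 0.5, 1 -> 1. Apply to x and y (and to z when
 * comparing against the stored depth). */
GLdouble ndc_to_texcoord(GLdouble ndc)
{
    return ndc * 0.5 + 0.5;
}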

You don't want to apply the bias to the objects being rendered into the depth texture, because OpenGL transforms the coordinates from NDC to window coordinates for you (the window being your texture in this case; use glViewport for the correct transformation).
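For concreteness, here is a sketch of what that depth pass might look like; shadowFBO, SHADOW_WIDTH and SHADOW_HEIGHT are hypothetical names for a framebuffer object with the depth texture attached and its dimensions:

/* Sketch of a depth pass. 'shadowFBO' is assumed to be a framebuffer
 * object with the depth texture attached; SHADOW_WIDTH and
 * SHADOW_HEIGHT are its dimensions. glViewport tells OpenGL to map
 * NDC [-1, 1] onto the full texture when converting to window
 * coordinates. */
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
glClear(GL_DEPTH_BUFFER_BIT);
/* ... draw the scene from the light's point of view ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);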

To apply the bias to your texture coordinates you can use a texture bias matrix and multiply it with your projection matrix, so the shaders don't have to worry about it. The post you linked already gives that matrix:

/* Scales x, y and z by 0.5 and then translates them by 0.5,
 * mapping [-1, 1] to [0, 1]. Stored column major, so the last
 * line of the initializer is the fourth column (the translation). */
const GLdouble bias[16] = {
  0.5, 0.0, 0.0, 0.0,
  0.0, 0.5, 0.0, 0.0,
  0.0, 0.0, 0.5, 0.0,
  0.5, 0.5, 0.5, 1.0
};

Provided your matrices are column major, this matrix transforms [-1, 1] to [0, 1]: it first multiplies by 0.5 and then adds 0.5. If your matrices are row major, simply transpose the matrix and you're good to go.
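If you want to fold the bias into the projection on the CPU, a generic column-major multiply does the job; multiply4x4 and lightProjection below are illustrative names, not from the post:

/* Illustrative column-major 4x4 multiply: out = a * b.
 * In column-major storage, element (row r, column c) is at
 * index c*4 + r. */
void multiply4x4(GLdouble out[16], const GLdouble a[16], const GLdouble b[16])
{
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            GLdouble sum = 0.0;
            for (int k = 0; k < 4; ++k)
                sum += a[k*4 + r] * b[c*4 + k];
            out[c*4 + r] = sum;
        }
}

Calling multiply4x4(biasedProjection, bias, lightProjection) once then leaves the shaders with coordinates that land directly in [0, 1] after the w divide.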

Hope this helped.
