Question

I'm implementing deferred lighting in my OpenGL graphics engine, following this tutorial. That part works fine; I have no trouble with it.

When it comes to the point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, explained precisely here. To solve those, the tutorial uses the stencil test.

I doubt the efficiency of that method, which leads me to my first question:

Wouldn't it be much better to draw a circle representing the light-sphere?

A sphere always looks like a circle on screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have three advantages:

  • No face-culling issue
  • No camera-inside-the-light-sphere issue
  • Much more efficient (vertex count severely reduced + no stencil test)

Are there any disadvantages using this technique?

My second question deals with implementing the mentioned method. The circle's center position can be calculated as usual:

vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0); // clip-space position of the light center
vec2 centerpoint = vec2(screenpos / screenpos.w);            // perspective division gives normalized device coordinates

But how do I calculate the scaling of the resulting circle? It should depend on the distance from the camera to the light and, somehow, on the perspective projection.


Solution

I don't think that would work. The point of using spheres is that they act as light volumes, not just circles: we want to apply lighting to those polygons in the scene that lie inside the light volume. As the scene is rendered, the depth buffer is written, and that data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle were projected to the correct depth.

(Figure: scene objects A and C relative to the light volume and the camera.)
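
To make that concrete, here is a minimal sketch of a point-light pass fragment shader in the usual deferred-shading style. It is not the tutorial's actual code: the G-buffer layout and every uniform name (gPosition, gNormal, gAlbedo, lightPos, lightColor, lightRadius, screenSize) as well as the linear falloff are assumptions for illustration. The stored position of the visible surface, not the projected shape of the light, decides whether a pixel gets lit:

#version 330 core
out vec4 fragColor;

// Assumed G-buffer layout (names are placeholders, not the tutorial's):
uniform sampler2D gPosition;   // view-space position of the visible surface
uniform sampler2D gNormal;     // view-space normal
uniform sampler2D gAlbedo;     // diffuse color

uniform vec3  lightPos;        // light position in view space
uniform vec3  lightColor;
uniform float lightRadius;     // radius of the light volume
uniform vec2  screenSize;

void main()
{
    // Look up the surface this pixel actually shows.
    vec2 uv     = gl_FragCoord.xy / screenSize;
    vec3 pos    = texture(gPosition, uv).xyz;
    vec3 normal = normalize(texture(gNormal, uv).xyz);
    vec3 albedo = texture(gAlbedo, uv).rgb;

    // The real "is it inside the light volume?" test happens here, per pixel.
    vec3  toLight = lightPos - pos;
    float dist    = length(toLight);
    if (dist > lightRadius)
        discard;               // surface lies outside the light volume

    // Simple linear falloff that reaches zero at the volume border.
    float atten = 1.0 - dist / lightRadius;
    float ndotl = max(dot(normal, normalize(toLight)), 0.0);
    fragColor   = vec4(albedo * lightColor * ndotl * atten, 1.0);
}

Whether you rasterize a sphere, a circle or a full-screen quad only changes how many pixels run this shader; the distance test against the stored surface position is what decides whether A or C actually receives light.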

OTHER TIPS

I didn't read the whole thing, but I think I understand the general idea of this method.

  1. It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane - in this case none of the fragments will be generated, and the light will "disappear".

  2. The lights described in the article will have a sharp falloff - understandably so, since a sphere or a circle has a sharp border. I wouldn't call that point lighting...

  3. To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about (see the full-screen sketch below the list). Don't forget that all the manipulation of OpenGL state and the additional draw calls also introduce overhead, and it is not clear which cost will outweigh the other here.

  4. Make sure you do the perspective division here (the division by screenpos.w); without it the position is still in clip space.

  5. The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the length of the vector from the projected center to it. Obviously, it must be a point on the border of the circle in screen space (see the scaling sketch below the list).
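
For the scaling in point 5, here is a rough sketch written as a vertex shader that expands a unit circle to the light's projected size. The uniform names (viewMatrix, projectionMatrix, lightPos, lightRadius) are assumptions, and offsetting the center along the camera's right and up axes only approximates the true silhouette point, slightly underestimating it when the camera is close to the sphere:

#version 330 core
// Unit circle (or quad) in the range [-1, 1], expanded to the light's size on screen.
// Depth testing is assumed to be disabled for this pass.
layout(location = 0) in vec2 circleVertex;

uniform mat4  viewMatrix;
uniform mat4  projectionMatrix;
uniform vec3  lightPos;      // world-space light position
uniform float lightRadius;   // world-space radius of the light sphere

vec2 toNdc(vec4 clip)
{
    return clip.xy / clip.w;   // perspective division
}

void main()
{
    mat4 viewProj = projectionMatrix * viewMatrix;

    // Camera right/up axes in world space (first two rows of the view rotation).
    vec3 camRight = vec3(viewMatrix[0][0], viewMatrix[1][0], viewMatrix[2][0]);
    vec3 camUp    = vec3(viewMatrix[0][1], viewMatrix[1][1], viewMatrix[2][1]);

    // Project the center and two points on the sphere's surface.
    vec2 center = toNdc(viewProj * vec4(lightPos, 1.0));
    vec2 edgeX  = toNdc(viewProj * vec4(lightPos + camRight * lightRadius, 1.0));
    vec2 edgeY  = toNdc(viewProj * vec4(lightPos + camUp    * lightRadius, 1.0));

    // Keep the horizontal and vertical extents separate: NDC is not square,
    // so a circle on screen is an ellipse in NDC.
    vec2 extent = vec2(length(edgeX - center), length(edgeY - center));

    // Note: as point 1 says, this breaks when the light center moves behind
    // the near plane (clip.w <= 0).
    gl_Position = vec4(center + circleVertex * extent, 0.0, 1.0);
}

The two edge points play the role of the "point on the border in screen space" mentioned above.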
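
For the full-screen approach in point 3, a common trick is a full-screen triangle generated entirely in the vertex shader (it covers the same pixels as a quad). This is a minimal sketch, drawn once per light with additive blending and, for example, the point-light fragment shader from the accepted answer's sketch:

#version 330 core
// Full-screen triangle without any vertex buffer: gl_VertexID 0, 1, 2 map to
// (-1,-1), (3,-1), (-1,3) in NDC, which together cover the whole screen.
void main()
{
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
}

With an empty VAO bound, glDrawArrays(GL_TRIANGLES, 0, 3) is enough to run the lighting shader over every pixel, with no stencil passes and no special cases for the camera position.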
