Question

So I was studying the rendering pipeline, particularly transformations, when I came across a figure of the pipeline stages.

I believe that the transformation from camera space to homogeneous clip space happens in the T&L stage (or in the vertex shader, if we're using one), but I'm not sure whether the lighting calculations are made after projection or before (to be honest, I think they could already be made in world coordinates). Then, to get to NDC, we have to divide by w. Is this step still considered part of the T&L calculations, or does it belong to another stage? Finally, the last transformation, from NDC to viewport/screen coordinates: this happens right after clipping and before rasterization, am I right?


Solution

Well, it depends on how broadly you define the term "T&L". Looking at this from the perspective of the programmable pipeline, going from object space to clip space is entirely the programmer's job: it is done in the vertex shader, as is defining any intermediate spaces such as world space or eye space.
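As a rough sketch (in plain Python rather than actual shader code, with a made-up projection matrix and vertex), the vertex shader's core job amounts to multiplying each vertex position by the combined model-view-projection matrix to produce homogeneous clip coordinates:

```python
# Minimal sketch of the vertex-shader transform: object/eye space -> clip space.
# The matrix and vertex values are illustrative stand-ins, not real shader code.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A simple perspective projection (near = 1, far = 10, 90-degree FOV, square aspect).
n, f = 1.0, 10.0
projection = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
    [0.0, 0.0, -1.0, 0.0],
]

# Identity model-view for simplicity: the vertex is already in eye space.
vertex_eye = [0.5, 0.5, -2.0, 1.0]       # position with w = 1
vertex_clip = mat_vec(projection, vertex_eye)
print(vertex_clip)                        # homogeneous clip coordinates; w is no longer 1
```

Note that `w` comes out as the (negated) eye-space depth here; that is what the later perspective divide will use.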

The perspective divide, clipping, and viewport transform are still fixed-function stages that sit between the vertex shader and rasterization (rasterization has to happen relative to the pixel raster of the framebuffer we are rendering into). Note that there are also other stages in between, such as primitive assembly.
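Those two fixed-function coordinate steps can be sketched numerically (the clip-space position and the 800x600 viewport size below are just assumed example values):

```python
# Sketch of the fixed-function steps after the vertex shader:
# perspective divide (clip -> NDC), then viewport transform (NDC -> window).

def perspective_divide(clip):
    x, y, z, w = clip
    return [x / w, y / w, z / w]          # NDC: each component lands in [-1, 1]

def viewport_transform(ndc, width, height):
    x, y, z = ndc
    # Map x and y from [-1, 1] to pixel coordinates; map z to the [0, 1] depth range.
    return [(x + 1.0) * 0.5 * width, (y + 1.0) * 0.5 * height, (z + 1.0) * 0.5]

clip = [0.5, 0.5, 0.2, 2.0]               # example clip-space position
ndc = perspective_divide(clip)            # -> [0.25, 0.25, 0.1]
window = viewport_transform(ndc, 800, 600)
print(window)                             # -> [500.0, 375.0, 0.55]
```

Nothing here is programmable in the classic pipeline; the vertex shader only hands over the clip-space position, and the hardware does the rest.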

but I'm not sure if the lighting calculations are made after projection or before (to be honest I think that they could be made in world coordinates already).

You have to distinguish two things here: the space in which the lighting calculation happens, and the pipeline stage in which it happens. As to the space: traditionally, lighting was calculated in eye space. With the programmable pipeline, though, you can do it in any space you can come up with; typically, eye space or world space is used. You usually don't want to do it in clip space, since the perspective transform distorts angles.
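For instance, a simple Lambertian (diffuse) term only needs a surface normal and a light direction expressed in the same non-distorted space, e.g. eye space. A minimal sketch, with made-up example vectors:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def lambert_diffuse(normal, to_light):
    """Diffuse intensity: clamped dot product of the unit normal and light direction."""
    n, l = normalize(normal), normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# Both vectors given in eye space; the same dot product taken after the
# perspective transform would be meaningless, because angles are distorted.
normal_eye = [0.0, 0.0, 1.0]              # surface facing the camera
light_eye = [0.0, 1.0, 1.0]               # light above and behind the camera
print(lambert_diffuse(normal_eye, light_eye))   # ~0.707, i.e. cos(45 degrees)
```

The same formula works unchanged in world space, as long as the normal and the light vector are in the same space.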

As to the pipeline stage: the classic variant was Gouraud shading, or "per-vertex lighting", and it was done in the vertex processing stage (hence the "&L" in "T&L" for that stage). On modern GPUs, lighting is typically done per fragment/pixel, in the fragment/pixel shaders that are invoked after rasterization. But some non-distorted space like eye space or world space is still used for the calculation.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow