No. Normalizing the color creates an interesting effect, but you almost certainly don't want it most, if not all, of the time.
Normalizing the color output loses information, even though it may seem to bring out more detail in some scenes. If every fragment has its color normalized, then every RGB vector has a norm of exactly 1. This means there are colors that simply cannot appear in your output: white (norm = sqrt(3)), bright colors such as yellow (norm = sqrt(2)), dark colors such as dark red (norm(0.5, 0.0, 0.0) = 0.5), and so on. You also run into the problem of normalizing the zero vector (i.e. black), which is undefined.
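A quick way to see the information loss, sketched in Python rather than GLSL (the `normalize` helper mirrors what `normalize()` does to a vec3 in a shader):

```python
import math

def normalize(rgb):
    """Scale an RGB vector to unit length, like normalize() on a vec3."""
    n = math.sqrt(sum(c * c for c in rgb))
    if n == 0.0:
        # The zero vector (black) cannot be normalized.
        raise ZeroDivisionError("cannot normalize black (0, 0, 0)")
    return tuple(c / n for c in rgb)

white    = (1.0, 1.0, 1.0)
yellow   = (1.0, 1.0, 0.0)
dark_red = (0.5, 0.0, 0.0)

# Every normalized color has norm 1, so intensity information is gone:
print(normalize(white))     # each component 1/sqrt(3) ≈ 0.577: mid gray, not white
print(normalize(yellow))    # components 1/sqrt(2) ≈ 0.707: a dimmer yellow
print(normalize(dark_red))  # (1.0, 0.0, 0.0): dark red becomes fully bright red
```

Note how dark red is brightened and white is darkened onto the same unit sphere: two very different intensities collapse to the same norm.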
For another way to see why color normalization is wrong, consider the less general case of rendering a grayscale image. With only one color component, normalization makes no sense at all: it would turn every non-black color into 1.0.
The problem with using the values without normalization is that the output image has its color values clamped to a fixed interval: [0, 255] or [0.0, 1.0]. Since the specular parts of your object reflect more light than those that reflect only diffuse light, the computed color there may well exceed (1.0, 1.0, 1.0) and get clamped to white over most of the specular area, so those areas end up uniformly blown out.
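A minimal sketch of the clamping problem, with hypothetical diffuse and specular contributions (the numbers are made up for illustration, not taken from any particular shader):

```python
def clamp01(rgb):
    """Clamp each channel to [0.0, 1.0], as the framebuffer effectively does."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

# Hypothetical per-channel contributions at a specular highlight:
diffuse  = (0.8, 0.6, 0.4)
specular = (0.9, 0.9, 0.9)

color = tuple(d + s for d, s in zip(diffuse, specular))  # (1.7, 1.5, 1.3)
print(clamp01(color))  # (1.0, 1.0, 1.0): the whole highlight clamps to white
```

All the per-channel variation above 1.0 is lost, which is why the highlight reads as a flat white blob.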
A simple solution is to lower the material constants or the light intensity. You could go one step further and choose the material constants and light intensity such that the computed color can never exceed (1.0, 1.0, 1.0). The same result could be achieved by simply dividing the computed color by a constant, provided consistent values are used for all the materials and lights in the scene, but that is overkill: the scene would probably end up too dark.
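One way to pick such constants, assuming a standard Phong-style model where the geometric terms (N·L, (R·V)^n) never exceed 1: make the ambient, diffuse, and specular coefficients sum to at most 1 per channel. The specific values below are arbitrary examples.

```python
# Hypothetical material coefficients chosen so that ka + kd + ks <= 1.0:
ka, kd, ks = 0.1, 0.6, 0.3
light = 1.0  # light intensity capped at 1.0

# In the Phong model the geometric factors are at most 1, so the
# worst case per channel is light * (ka + kd + ks):
worst_case = light * (ka + kd + ks)
print(worst_case)  # <= 1.0, so the output never clamps
```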
The more complex, but better-looking, solution involves HDR rendering with tone mapping and effects such as bloom to obtain more photo-realistic images. In essence, you render the scene into a floating-point buffer, which can hold a far greater range than a [0, 255] RGB buffer, then simulate how a camera or the human eye adapts to a certain light intensity (exposure/tone mapping) and the image artefacts that mechanism produces (e.g. bloom).
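As one simple tone-mapping sketch (exponential exposure mapping, one of several common operators; the HDR values and exposure settings here are illustrative):

```python
import math

def tone_map(hdr_rgb, exposure=1.0):
    """Map an unbounded HDR color into [0, 1) with simple exponential
    exposure tone mapping: out = 1 - exp(-c * exposure)."""
    return tuple(1.0 - math.exp(-c * exposure) for c in hdr_rgb)

hdr = (3.0, 1.5, 0.2)      # values well above 1.0 from the float buffer
print(tone_map(hdr, 1.0))  # bright, but the per-channel detail survives
print(tone_map(hdr, 0.25)) # lower exposure: highlights darken smoothly
```

Unlike a hard clamp, the mapping is monotonic and never reaches 1.0, so relative differences between bright channels are compressed rather than discarded.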