Question

I recently wrote a Phong shader in GLSL as part of a school assignment. I started with tutorials, then played around with the code until I got it working. It works perfectly fine as far as I can tell, but there's one line in particular where I don't understand why it works.

The vertex shader:

#version 330

layout (location = 0) in vec3 Position;    // Vertex position
layout (location = 1) in vec3 Normal;      // Vertex normal

out vec3 Norm;
out vec3 Pos;
out vec3 LightDir;

uniform mat3 NormalMatrix;      // ModelView matrix without the translation component, and inverted
uniform mat4 MVP;               // ModelViewProjection Matrix
uniform mat4 ModelView;         // ModelView matrix
uniform vec3 light_pos;         // Position of the light

void main()
{
    Norm = normalize(NormalMatrix * Normal);
    Pos = Position;
    LightDir = NormalMatrix * (light_pos - Position);

    gl_Position = MVP * vec4(Position, 1.0);
}

The fragment shader:

#version 330

in vec3 Norm;
in vec3 Pos;
in vec3 LightDir;

layout (location = 0) out vec4 FragColor;

uniform mat3 NormalMatrix;
uniform mat4 ModelView;

void main()
{
    vec3 normalDirCameraCoords = normalize(Norm);       // interpolated normal, already in eye space
    vec3 vertexPosLocalCoords = normalize(Pos);         // note: Pos is still in local (model) coordinates
    vec3 lightDirCameraCoords = normalize(LightDir);    // light direction, brought into eye space in the vertex shader

    float dist = max(length(LightDir), 1.0);

    float intensity = max(dot(normalDirCameraCoords, lightDirCameraCoords), 0.0) / pow(dist, 1.001);

    // The line in question: an eye-space direction minus a normalized local-space position
    vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
    float intSpec = max(dot(h, normalDirCameraCoords), 0.0);
    vec4 spec = vec4(0.9, 0.9, 0.9, 1.0) * (pow(intSpec, 100) / pow(dist, 1.2));

    FragColor = max((intensity * vec4(0.7, 0.7, 0.7, 1.0)) + spec, vec4(0.07, 0.07, 0.07, 1.0));
}

So I'm doing the method where you calculate the half vector between the light vector and the camera vector, then dot it with the normal. That's all good. However, I do two things that are strange.

  1. Normally, everything is done in eye coordinates. However, Position, which I pass from the vertex shader to the fragment shader, is in local coordinates.

  2. This is the part that baffles me. On the line vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords); I'm subtracting the vertex position in local coordinates from the light vector in camera coordinates. This seems utterly wrong.

In short, I understand what this code is supposed to be doing and how the half-vector method of Phong shading works.

But why does this code work?

EDIT: The starter code we were provided is open source, so you can download the completed project and look at it directly if you'd like. The project is for VS 2012 on Windows (you'll need to set up GLEW, GLM, and freeGLUT), and should work on GCC with no code changes (maybe a change or two to the makefile library paths).

Note that in the source files, "light_pos" is called "gem_pos", as our light source is the little gem you move around with WSADXC. Press M to get Phong with multiple lights.

Solution

The fact that this works at all is happenstance, but it's interesting to see why it does.

Phong shading is three techniques in one

With Phong shading, we have three terms: specular, diffuse, and ambient; each of these terms is really its own technique.

None of these terms strictly requires a particular vector space; you can make Phong shading work in world, local, or camera space as long as you are consistent. Eye space is usually used for lighting, as it is easy to work with and the conversions are simple.
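
To make "consistent" concrete, an all-eye-space fragment shader might look roughly like this (a sketch, not your assignment's code; NormEye, PosEye, and LightPosEye are placeholder names assumed to be supplied in eye space, and the constants simply mirror the ones in your shader):

#version 330

in vec3 NormEye;            // eye-space normal (assumed to be interpolated from the vertex shader)
in vec3 PosEye;             // eye-space position of the fragment

uniform vec3 LightPosEye;   // light position already transformed to eye space (an assumption here)

layout (location = 0) out vec4 FragColor;

void main()
{
    vec3 N = normalize(NormEye);
    vec3 L = normalize(LightPosEye - PosEye);
    vec3 V = normalize(-PosEye);        // in eye space the camera sits at the origin
    vec3 R = reflect(-L, N);            // classic Phong reflection of the light about the normal

    vec3 ambient  = vec3(0.07);
    vec3 diffuse  = vec3(0.7) * max(dot(N, L), 0.0);
    vec3 specular = vec3(0.9) * pow(max(dot(R, V), 0.0), 100.0);

    FragColor = vec4(ambient + diffuse + specular, 1.0);
}

The specific space isn't the point; a world-space version would look the same as long as every vector in it is also world-space.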

But what if you are at the origin? Now you are multiplying by zero, and any linear transform of the zero vector is still the zero vector, so there's no difference between any of the vector spaces at the origin. By coincidence, at the origin it doesn't matter what vector space you are in; it'll work.

vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);

Notice that it's basically subtracting 0; this is the only place local coordinates are used, and it's the one place where they can do the least damage. Since the object is at the origin, all of its vertices should be at or very close to the origin as well. At the origin the approximation is exact; all vector spaces converge. Very close to the origin it's very close to exact; even if we used exact reals it would be a very small divergence, but we don't use exact reals, we use floats, which compounds the imprecision.

Basically, you got lucky; this wouldn't work if the object weren't at the origin. Try moving it and see!
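
If you want it to keep working once the object moves away from the origin, one minimal change (just a sketch against your variable names, not the assignment's reference code) is to pass the eye-space position instead of the local one. In the vertex shader:

    Pos = vec3(ModelView * vec4(Position, 1.0));    // eye-space position instead of the local one

With that, normalize(Pos) in the fragment shader is the direction from the camera to the fragment, so the subtraction really does build the standard Blinn-Phong half vector:

    vec3 vertexPosEyeCoords = normalize(Pos);                          // camera-to-fragment direction in eye space
    vec3 h = normalize(lightDirCameraCoords - vertexPosEyeCoords);     // equivalent to normalize(L + V)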

Also, you aren't really using Phong shading; you are using Blinn-Phong shading (that's the name for the variant that replaces reflect() with a half vector, just for reference).
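
For reference, the two specular terms look like this side by side (a sketch; N, L, and V are assumed to be normalized and in the same space, and shininess is a placeholder exponent):

    // Phong: reflect the light direction about the normal, then compare with the view direction
    vec3 R = reflect(-L, N);
    float specPhong = pow(max(dot(R, V), 0.0), shininess);

    // Blinn-Phong: compare the normal with the half vector between the light and view directions
    vec3 H = normalize(L + V);
    float specBlinn = pow(max(dot(N, H), 0.0), shininess);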

Licensed under: CC-BY-SA with attribution