Question

I'm implementing a target spotlight. I have the light cone, fall-off and all of that down and working OK. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.

Here's what I'm doing in my vertex shader:

void main()
{
    // Compute vertex normal in eye space.

    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

    // Compute position in eye space.

    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);

    // Compute vector between light and vertex.

    attrib_Fragment_Light = Light_Position - position.xyz;

    // Compute spot-light cone direction vector.

    attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);

    // Compute vector from eye to vertex.

    attrib_Fragment_Eye = -position.xyz;

    // Output texture coord.

    attrib_Fragment_Texture = attrib_Texture;

    // Return position.

    gl_Position = Camera_Projection * position;
}

I have a target spotlight defined by Light_Position and Light_LookAt (look-at being the point in space the spotlight is looking at, of course). Both position and lookAt are already in eye space; I computed eye space CPU-side by subtracting the camera position from them both.
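
Concretely, the conversion I'm doing CPU-side amounts to this (the names here are just illustrative):

// Eye-space conversion as described above (illustrative names): subtract
// the camera position from the world-space light position and look-at.

vec3 Light_Position_Eye = Light_Position_World - Camera_Position;
vec3 Light_LookAt_Eye   = Light_LookAt_World   - Camera_Position;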

In the vertex shader I then go on to make a light-cone vector from the light position to the light lookAt point, which informs the pixel shader where the main axis of the light cone is.

At this point I'm wondering if I have to transform the vector as well, and if so, by what? I've tried the inverse transpose of the view matrix, with no luck.
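
That is, if Light_Position and Light_LookAt were still in world space, the transform in question would look something like this (View standing for the camera's view matrix; w = 0 so that only its rotation applies):

// Transform the world-space cone direction into eye space.

attrib_Fragment_Light_Direction = normalize((View * vec4(Light_LookAt - Light_Position, 0.0)).xyz);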

Can anyone take me through this?

Here's the pixel shader for completeness:

void main(void)
{   
    // Compute N dot L.

    vec3 N = normalize(attrib_Fragment_Normal);
    vec3 L = normalize(attrib_Fragment_Light);  
    vec3 E = normalize(attrib_Fragment_Eye);
    vec3 H = normalize(L + E);

    float NdotL = clamp(dot(L,N), 0.0, 1.0);
    float NdotH = clamp(dot(N,H), 0.0, 1.0);

    // Compute ambient term.

    vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

    // Diffuse.

    vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

    // Specular.

    float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;

    vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

    // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).

    float d = length(attrib_Fragment_Light);

    float attenuation = smoothstep( Light_Attenuation_Max, 
                                    Light_Attenuation_Min, 
                                    d);
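
    // Note: smoothstep() is only specified for edge0 < edge1; passing the
    // edges reversed like this works on many drivers but is technically
    // undefined. 1.0 - smoothstep(Light_Attenuation_Min, Light_Attenuation_Max, d)
    // is the safer spelling and gives the same near-1, far-0 falloff.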

    // Adjust attenuation based on light cone.

    vec3 S = normalize(attrib_Fragment_Light_Direction);

    float LdotS = dot(-L, S);
    float CosI = Light_Cone_Min - Light_Cone_Max;
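
    // (Light_Cone_Min / Light_Cone_Max presumably hold the cosines of the
    // inner (minimum) and outer (maximum) cone angles; since the inner angle
    // is smaller, Light_Cone_Min > Light_Cone_Max, and CosI spans the
    // transition band between full intensity and darkness.)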

    attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

    // Final colour.

    Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;    
}

Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye space CPU-side, so no transforms of the light should be necessary in the shader, but it still doesn't work.

// Compute eye-space light position.

Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;

MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);



// Compute eye-space light direction vector.

Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);

MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);

MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);

... and in the vertex shader I'm doing this (below). As far as I can see, the light is in eye space, the vertex is transformed into eye space, and the lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!

// Transform normal from model space, through world space and into eye space (world * view * normal = eye).

attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

// Transform vertex into eye space (world * view * vertex = eye)

vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);

// Compute vector from eye space vertex to light (which has already been put into eye space).

attrib_Fragment_Light = Light_Position - position.xyz;

// Compute vector from the vertex to the eye (which is now at the origin).

attrib_Fragment_Eye = -position.xyz;

// Output texture coord.

attrib_Fragment_Texture = attrib_Texture;
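
One sanity check I can do is to output the interpolated light vector as a colour in the pixel shader; if the light really is fixed in world space, the colour should visibly shift as the camera orbits:

// Debug output: map the eye-space light vector into the 0..1 colour range.

Out_Colour = vec4(normalize(attrib_Fragment_Light) * 0.5 + 0.5, 1.0);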

Solution

It looks like you're subtracting Light_Position, which I assume you want to be a world-space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye-space vector.

// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;

If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.

That means multiplying the attrib_Position variable by the Model matrix, not the ModelView matrix, and using that vector as the basis for your lighting computation.
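
As a minimal sketch of that change (Model and Light_Position_World are assumed names for your model matrix and a world-space light position):

// World-space variant: both operands of the subtraction are now in world
// space, so the result no longer depends on the camera.

vec4 worldPosition = Model * vec4(attrib_Position, 1.0);
attrib_Fragment_Light = Light_Position_World - worldPosition.xyz;

Note that the normal and eye vectors would then have to be expressed in world space too, so that the whole lighting computation stays in one coordinate space.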

Other tips

You can't compute the eye-space position by just subtracting the camera position; you have to multiply by the modelview matrix (for a light given directly in world space, the model part is the identity, so this is just the view matrix).
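
In shader terms, that is the difference between these two (View, Light_Position_World and Camera_Position again being assumed names):

// Correct: the full view transform applies the camera's rotation as well
// as its translation.

vec3 lightPositionEye = (View * vec4(Light_Position_World, 1.0)).xyz;

// Incorrect: subtracting the camera position only accounts for the
// translation, which is why the light appears glued to the camera.

vec3 lightPositionWrong = Light_Position_World - Camera_Position;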

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow