Question

I am trying to implement simple projective texture mapping using shaders in OpenGL 3+. While there are some examples on the web, I am having trouble creating a working shader-based example.

I am actually planning on using two shaders: one that does a normal scene draw, and another for projective texture mapping. I have a function for drawing the scene, void ProjTextureMappingScene::renderScene(GLFWwindow *window), and I am using glUseProgram() to switch between shaders. The normal drawing works fine. However, it is unclear to me how I am supposed to render the projective texture on top of an already textured cube. Do I somehow have to use a stencil buffer or a framebuffer object (the rest of the scene should be unaffected)?
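
For context, the overall structure of my render function is roughly the following (a heavily simplified sketch; normalSceneProgramID and the omitted uniform/texture setup are placeholders for my actual code):

void ProjTextureMappingScene::renderScene(GLFWwindow *window)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // pass 1: normal scene draw - this part works
    glUseProgram(normalSceneProgramID);
    // ... set P, MV, N and light uniforms, bind textures, draw the scene ...

    // pass 2: projective texture on top of the cube - this is the part I can't get right
    glUseProgram(projectiveTextureMappingProgramID);
    // ... set TexGenMat, InvViewMat and the other uniforms, then re-draw the cube ...

    glfwSwapBuffers(window);
}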

I also don't think that my projective texture mapping shaders are correct, since the second time I render the cube it shows up black. Further, when I tried to debug using colors, only the t component of the projected coordinates seems to be non-zero (so the cube appears green). I am overriding vTexColor in the fragment shader below just for debugging purposes.

Vertex shader

#version 330

uniform mat4 TexGenMat;
uniform mat4 InvViewMat;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 N;

layout (location = 0) in vec3 inPosition;
//layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;

out vec3 vNormal, eyeVec;
out vec2 texCoord;
out vec4 projCoords;

void main()
{
    vNormal = (N * vec4(inNormal, 0.0)).xyz;

    vec4 posEye    = MV * vec4(inPosition, 1.0);
    vec4 posWorld  = InvViewMat * posEye;
    projCoords     = TexGenMat * posWorld;

    // only needed for specular component
    // currently not used
    eyeVec = -posEye.xyz;

    gl_Position = P * MV * vec4(inPosition, 1.0);
}

Fragment shader

#version 330

uniform sampler2D projMap;
uniform sampler2D gSampler;
uniform vec4 vColor;

in vec3 vNormal, lightDir, eyeVec;
//in vec2 texCoord;
in vec4 projCoords;

out vec4 outputColor;

struct DirectionalLight
{
    vec3 vColor;
    vec3 vDirection;
    float fAmbientIntensity;
};

uniform DirectionalLight sunLight;

void main (void)
{
    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        vec2 finalCoords = projCoords.st / projCoords.q;
        vec4 vTexColor = texture(gSampler, finalCoords);
        // only t has non-zero values..why?
        vTexColor = vec4(finalCoords.s, finalCoords.t, finalCoords.r, 1.0);
        //vTexColor = vec4(projCoords.s, projCoords.t, projCoords.r, 1.0);
        float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}

Creation of TexGen Matrix

// GLM matrix constructors are column-major, so the 0.5 translation
// terms go into the last four values (the fourth column)
biasMatrix = glm::mat4(0.5f, 0.0f, 0.0f, 0.0f,
                       0.0f, 0.5f, 0.0f, 0.0f,
                       0.0f, 0.0f, 0.5f, 0.0f,
                       0.5f, 0.5f, 0.5f, 1.0f);

// 4:3 aspect ratio, 45-degree field of view
projectorP = glm::perspective(45.0f * zoomFactor, 4.0f / 3.0f, 0.1f, 1000.0f);
projectorOrigin = glm::vec3(-3.0f, 3.0f, 0.0f);
projectorTarget = glm::vec3(0.0f, 0.0f, 0.0f);
projectorV = glm::lookAt(projectorOrigin,               // projector origin
                         projectorTarget,               // project onto object at origin
                         glm::vec3(0.0f, 1.0f, 0.0f));  // Y axis is up
mModel = glm::mat4(1.0f);
...
texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mModel*mModelView);

Render Cube Again

It is also unclear to me what the modelview of the cube should be. Should it use the view matrix from the slide projector (as it is now) or the normal scene camera's view matrix? Currently the cube is rendered black (or green when debugging) in the middle of the scene view, as it would appear from the slide projector (I made a toggle hotkey so that I can see what the slide projector "sees"). The cube also moves with the view. How do I get the projection onto the cube itself?

mModel = glm::translate(projectorV, projectorOrigin);
// bind projective texture
tTextures[2].bindTexture();
// set all uniforms
...
// bind VBO data and draw
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);

Switch between main scene camera and slide projector camera

if (useMainCam)
{
    mCurrent   = glm::mat4(1.0f);
    mModelView = mModelView*mCurrent;
    mProjection = *pipeline->getProjectionMatrix();
}
else
{
    mModelView  = projectorV;
    mProjection = projectorP;
}

Solution

I have solved the problem. One issue was that I had confused the matrices of the two camera systems (world camera and projective-texture camera). Now, when I set the uniforms for the projective texture mapping part, I use the correct matrices for the MVP values - the same ones I use for the world scene.

// world camera matrices - the same ones used for the normal scene draw
glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
// projector-specific matrices for the texture projection
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));

Further, the invViewMatrix is just the inverse of the view matrix, not the modelview matrix (this didn't change the behaviour in my case, since the model was the identity, but it is wrong nonetheless). For my project I only wanted to selectively render a few objects with projective textures. To do this, for each such object, I make sure the current shader program is the one for projective textures by calling glUseProgram(projectiveTextureMappingProgramID). Next, I compute the required matrices for this object:

texGenMatrix = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);
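
Putting it together, the per-object draw for a projectively textured object looks roughly like this (a simplified sketch: the uniform locations are the ones shown above, while mCurrent = mView * mModel, the texture IDs and the sampler locations are placeholders for my scene code):

// switch to the projective texture mapping program for this object
glUseProgram(projectiveTextureMappingProgramID);

// per-object matrices (mCurrent is the object's modelview, mView * mModel)
texGenMatrix  = biasMatrix * projectorP * projectorV * mModel;
invViewMatrix = glm::inverse(mView);

glUniformMatrix4fv(iPTMProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iPTMModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
glUniformMatrix4fv(iPTMNormalLoc, 1, GL_FALSE, glm::value_ptr(glm::transpose(glm::inverse(mCurrent))));
glUniformMatrix4fv(iTexGenMatLoc, 1, GL_FALSE, glm::value_ptr(texGenMatrix));
glUniformMatrix4fv(iInvViewMatrix, 1, GL_FALSE, glm::value_ptr(invViewMatrix));

// gSampler and projMap must sample from different texture units
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, objectTextureID);     // placeholder texture ID
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, projectedTextureID);  // placeholder texture ID
glUniform1i(iGSamplerLoc, 0);                      // placeholder sampler locations
glUniform1i(iProjMapLoc, 1);

// draw the object
glBindVertexArray(uiVAOSceneObjects);
glDrawArrays(GL_TRIANGLES, 6, 36);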

Coming back to the shaders, the vertex shader is correct except that I re-added the UV texture coordinates (inCoord) for the current object and stored them in texCoord.

For the fragment shader I changed the main function to clamp the projective texture so that it doesn't repeat (I couldn't get it to work with the client-side GL_CLAMP_TO_EDGE). I am also sampling the object's default texture with its UV coordinates in case the projector does not cover the whole object, and I removed lighting from the projective texture since it is not needed in my case:

void main (void)
{
    vec2 finalCoords    = projCoords.st / projCoords.q;
    vec4 vTexColor      = texture(gSampler, texCoord);
    vec4 vProjTexColor  = texture(projMap, finalCoords);
    //vec4 vProjTexColor  = textureProj(projMap, projCoords);
    float fDiffuseIntensity = max(0.0, dot(normalize(vNormal), -sunLight.vDirection));

    // suppress the reverse projection
    if (projCoords.q > 0.0)
    {
        // clamp the projective texture manually (for some reason GL_CLAMP_TO_EDGE did not work for me)
        if (finalCoords.s > 0.0 && finalCoords.t > 0.0 && finalCoords.s < 1.0 && finalCoords.t < 1.0)
            //outputColor = vProjTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
            outputColor = vProjTexColor*vColor;
        else
            outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
    else
    {
        outputColor = vTexColor*vColor*vec4(sunLight.vColor * (sunLight.fAmbientIntensity + fDiffuseIntensity), 1.0);
    }
}
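
For completeness, the client-side wrap setup I was referring to looks like the sketch below. GL_CLAMP_TO_BORDER with a zero border color is another common way to cut the projection off at the texture edge, but note that it only makes the projected texture black/transparent outside [0,1] - the fallback to the object's own texture still needs the shader logic above (the texture ID is a placeholder):

glBindTexture(GL_TEXTURE_2D, projectedTextureID);  // placeholder texture ID
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
const GLfloat border[] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);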

If you are stuck and for some reason cannot get the shaders to work, you can check out an example in the "OpenGL 4.0 Shading Language Cookbook" (textures chapter) - I actually missed this until I got it working by myself.

In addition to all of the above, a great help for debugging whether the algorithm is working correctly was to draw the frustum (as a wireframe) for the projector camera. I used a separate shader for frustum drawing. The fragment shader just assigns a solid color, while the vertex shader is listed below with explanations:

#version 330

// input vertex data
layout(location = 0) in vec3 vp;

uniform mat4 P;
uniform mat4 MV;
uniform mat4 invP;
uniform mat4 invMV;
void main()
{
    /*The transformed clip space position c of a
    world space vertex v is obtained by transforming 
    v with the product of the projection matrix P 
    and the modelview matrix MV

    c = P MV v

    So, if we could solve for v, then we could
    generate vertex positions by plugging in clip
    space positions. For your frustum, one line
    would be between the clip space positions 

    (-1,-1,near) and (-1,-1,far), 

    the lower left edge of the frustum, for example.

    NB: If you would like to mix normalized device 
    coords (x,y) and eye space coords (near,far), 
    you need an additional step here. Modify your 
    clip position as follows

    c' = (c.x * c.z, c.y * c.z, c.z, c.z)

    otherwise you would need to supply both the z 
    and w for c, which might be inconvenient. Simply 
    use c' instead of c below.


    To solve for v, multiply both sides of the
    equation above with (P MV)^-1.

    This gives

    (P MV)^-1 c = v

    which is equivalent to

    MV^-1 P^-1 c = v

    P^-1 is given by

    |(r-l)/(2n)     0         0      (r+l)/(2n) |
    |     0    (t-b)/(2n)     0      (t+b)/(2n) |
    |     0         0         0         -1      |
    |     0         0   -(f-n)/(2fn) (f+n)/(2fn)|

    where l, r, t, b, n, and f are the parameters in the glFrustum() call.

    If you don't want to fool with inverting the 
    model matrix, the info you already have can be 
    used instead: the forward, right, and up 
    vectors, in addition to the eye position.

    First, go from clip space to eye space

         -1   
    e = P   c

    Next go from eye space to world space

    v = eyePos - forward*e.z + right*e.x + up*e.y

    assuming x = right, y = up, and -z = forward.
    */
    vec4 fVp = invMV * invP * vec4(vp, 1.0);
    gl_Position = P * MV * fVp;
}

The uniforms are used like this (make sure you use the right matrices):

// projector matrices
glUniformMatrix4fv(iFrustumInvProjectionLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorP)));
glUniformMatrix4fv(iFrustumInvMVLoc, 1, GL_FALSE, glm::value_ptr(glm::inverse(projectorV)));
// world camera
glUniformMatrix4fv(iFrustumProjectionLoc, 1, GL_FALSE, glm::value_ptr(*pipeline->getProjectionMatrix()));
glUniformMatrix4fv(iFrustumModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));

To get the input vertices needed for the frustum's vertex shader, you can compute the corner coordinates as follows (then just add them to your vertex array):

glm::vec3 ftl = glm::vec3(-1, +1, pFar); //far top left
glm::vec3 fbr = glm::vec3(+1, -1, pFar); //far bottom right
glm::vec3 fbl = glm::vec3(-1, -1, pFar); //far bottom left
glm::vec3 ftr = glm::vec3(+1, +1, pFar); //far top right
glm::vec3 ntl = glm::vec3(-1, +1, pNear); //near top left
glm::vec3 nbr = glm::vec3(+1, -1, pNear); //near bottom right
glm::vec3 nbl = glm::vec3(-1, -1, pNear); //near bottom left
glm::vec3 ntr = glm::vec3(+1, +1, pNear); //near top right

glm::vec3   frustum_coords[36] = {
    // near
    ntl, nbl, ntr, // 1 triangle
    ntr, nbl, nbr,
    // right
    nbr, ftr, ntr,
    ftr, nbr, fbr,
    // left
    nbl, ftl, ntl,
    ftl, nbl, fbl,
    // far
    ftl, fbl, fbr,
    fbr, ftr, ftl,
    //bottom
    nbl, fbr, fbl,
    fbr, nbl, nbr,
    //top
    ntl, ftr, ftl,
    ftr, ntl, ntr
};
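
To actually see the wireframe, I upload these coordinates once and draw them with the frustum shader in line mode, roughly like this (the VAO/VBO/program names are placeholders; the frustum_coords array above is assumed to still be in scope):

// one-time setup: upload the 36 frustum corner positions
GLuint frustumVAO, frustumVBO;
glGenVertexArrays(1, &frustumVAO);
glGenBuffers(1, &frustumVBO);
glBindVertexArray(frustumVAO);
glBindBuffer(GL_ARRAY_BUFFER, frustumVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(frustum_coords), frustum_coords, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                                // location 0 = vp in the frustum vertex shader
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

// per frame: draw the triangles as lines to get the wireframe
glUseProgram(frustumProgramID);                              // placeholder program handle
// ... set the four frustum uniforms as shown above ...
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBindVertexArray(frustumVAO);
glDrawArrays(GL_TRIANGLES, 0, 36);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);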

After all is said and done, it's nice to see how it looks:

texture projection example image

As you can see, I applied two projective textures: a biohazard image on Blender's Suzanne monkey head, and a smiley texture on the floor and a small cube. You can also see that the cube is partly covered by the projective texture, while the rest of it appears with its default texture. Finally, you can see the green frustum wireframe for the projector camera - and everything looks correct.
