Question

I've been playing around with OpenGL for the equivalent of a full week now. After 2D, I'm now trying 3D. I want to reproduce the 3D scene you can see in the third video at http://johnnylee.net/projects/wii/.
I've had a hard time making everything work properly with textures and depth.

I've recently had two problems with roughly the same visual impact:

  • One with textures that do not blend well in 3D using the techniques I've found for 2D.
  • One with objects appearing in reverse depth order (objects at the back drawn over objects at the front), like the problem described here: Depth Buffer in OpenGL

I've solved both problems, but I would like to know whether I've got things right, especially on the second point.


For the first one, I think I've got it. I have an image of a round target, with alpha for everything outside the disc. It loads fine into OpenGL. Other targets behind it (due to my z-ordering problem) ended up hidden by the transparent regions of the naturally square quad I used to paint it.

The reason for that was that every part of the texture is treated as fully opaque as far as the depth buffer is concerned. Using glEnable(GL_ALPHA_TEST) with a test of glAlphaFunc(GL_GREATER, 0.5f) makes the alpha layer of the texture act as a per-pixel (boolean) opacity indicator, and thus makes blending pretty much unnecessary (because my image only has boolean transparency).
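In isolation, the fix boils down to these two calls (the full listing below uses a 0.2 threshold instead of 0.5):

    glEnable(GL_ALPHA_TEST);        // fragments failing the test are discarded entirely,
    glAlphaFunc(GL_GREATER, 0.5f);  // so fully transparent texels never write to the depth buffer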

Supplementary question: By the way, is there a way to specify a different source for the alpha test than the alpha layer used for blending?


Second, I've found a fix for my problem. Before clearing the color and depth buffers, I set the default depth to 0 with glClearDepth(0.0f) and I use the "greater" depth function with glDepthFunc(GL_GREATER).

What looks strange to me is that the clear depth is 1.0 and the depth function is "less" (GL_LESS) by default. I'm basically inverting both so that my objects don't get displayed in inverted depth order...

I've never seen such a hack anywhere, but on the other hand I've also never seen objects systematically drawn in the wrong order, regardless of the order in which I draw them!


OK, here's the bit of code (stripped down, but hopefully not too much) that now works the way I want:

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
        glutInitWindowSize(600, 600); // Size of the OpenGL window
        glutCreateWindow("OpenGL - 3D Test"); // Creates OpenGL Window
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);

        PngImage* pi = new PngImage(); // custom class that correctly reads PNGs with transparency
        pi->read_from_file("target.png");
        GLuint texs[1];
        glGenTextures(1, texs);
        target_texture = texs[0];
        glBindTexture(GL_TEXTURE_2D, target_texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, pi->getGLInternalFormat(), pi->getWidth(), pi->getHeight(), 0, pi->getGLFormat(), GL_UNSIGNED_BYTE, pi->getTexels());

        glutMainLoop(); // never returns!
        return 0;
    }

    void reshape(int w, int h) {
        glViewport(0, 0, (GLsizei) w, (GLsizei) h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(-1, 1, -1, 1);
        gluPerspective(45.0, w/(GLdouble)h, 0.5, 10.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

    void display(void) {
        // The starred *** lines in this function make up the (ugly?) fix for my second problem
        glClearColor(0, 0, 0, 1.00);
        glClearDepth(0);          // ***
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glShadeModel(GL_SMOOTH);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_GREATER);  // ***

        draw_scene();

        glutSwapBuffers();
        glutPostRedisplay();
    }

    void draw_scene() {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(1.5, 0, -3, 0, 0, 1, 0, 1, 0);

        glColor4f(1.0, 1.0, 1.0, 1.0);
        glEnable(GL_TEXTURE_2D);
        // The following 2 lines fix the first problem
        glEnable(GL_ALPHA_TEST);       // makes highly transparent parts
        glAlphaFunc(GL_GREATER, 0.2f); // as not existent/not drawn
        glBindTexture(GL_TEXTURE_2D, target_texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Drawing a textured target
        float x = 0, y = 0, z = 0, size = 0.2;
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(x-size, y-size, z);
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(x+size, y-size, z);
        glTexCoord2f(1.0f, 1.0f);
        glVertex3f(x+size, y+size, z);
        glTexCoord2f(0.0f, 1.0f);
        glVertex3f(x-size, y+size, z);
        glEnd();
        // Drawing a second textured target behind the first one (but drawn after it)
        x = 0; y = 0; z = 2; size = 0.2;
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(x-size, y-size, z);
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f(x+size, y-size, z);
        glTexCoord2f(1.0f, 1.0f);
        glVertex3f(x+size, y+size, z);
        glTexCoord2f(0.0f, 1.0f);
        glVertex3f(x-size, y+size, z);
        glEnd();
    }

Solution

Normally the depth clear value is 1 (effectively infinity) and the depth pass function is LESS because you want to simulate the real world where you see things that are in front of the things behind them. By clearing the depth buffer to 1, you are essentially saying that all objects closer than the maximum depth should be drawn. Changing those parameters is generally not something you would want to do unless you really understand what you are doing.
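In other words, the conventional setup, which is also the default state, looks roughly like this:

    glClearDepth(1.0);        // default: the buffer starts out "as far away as possible"
    glDepthFunc(GL_LESS);     // default: a fragment wins if it is closer than what is stored
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);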

With the camera parameters you are passing to gluLookAt and the positions of your objects, the z=2 quad will be further from the camera than the z=0 object. What are you trying to accomplish such that this doesn't seem correct?

The standard approach to achieve order-correct alpha blending is to render all opaque objects, then render all transparent objects back to front. The regular/default depth function would always be used.
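A rough sketch of that pattern (draw_opaque_objects and draw_transparent_objects_back_to_front are hypothetical helpers standing in for your own drawing code):

    // Pass 1: opaque geometry with normal depth testing and writing.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);
    draw_opaque_objects();                      // hypothetical helper

    // Pass 2: transparent geometry, sorted and drawn back to front.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);                      // common refinement: test depth, but don't write it
    draw_transparent_objects_back_to_front();   // hypothetical helper
    glDepthMask(GL_TRUE);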

Also note that you may get some weird behavior from the way you are setting up your projection matrix. Normally you would call gluOrtho2D OR gluPerspective, but not both. Calling both multiplies the two projection matrices together, which is probably not what you want.
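For example, the reshape function could keep just the perspective projection (a sketch reusing the values already in the question):

    void reshape(int w, int h) {
        glViewport(0, 0, (GLsizei) w, (GLsizei) h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // Pick ONE projection; perspective makes sense here since the scene is 3D.
        gluPerspective(45.0, w / (GLdouble) h, 0.5, 10.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }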

OTHER TIPS

Supplementary question: By the way, is there a way to specify a different source for the alpha test than the alpha layer used for blending?

Yes: if you use a shader, you can compute the alpha value of the output fragment yourself.
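A hypothetical sketch of the idea, written against the legacy (fixed-function compatible) pipeline: the discard test reads the red channel, while the alpha that gets blended still comes from the texture.

    // Hypothetical fragment shader source; compile and attach it with the usual
    // glCreateShader / glShaderSource / glCompileShader / glAttachShader calls.
    const char* frag_src =
        "uniform sampler2D tex;\n"
        "void main() {\n"
        "    vec4 c = texture2D(tex, gl_TexCoord[0].st);\n"
        "    if (c.r < 0.5)      // 'alpha test' driven by the red channel, not alpha\n"
        "        discard;\n"
        "    gl_FragColor = c;   // c.a is still the alpha used for blending\n"
        "}\n";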

Regarding the second problem: There is very probably something wrong with your modelViewProjection matrix.

I've had the same problem (and could "fix" it with your hack), which was caused by a matrix of mine that was subtly wrong. I solved it by implementing my own matrix generation.

The formulas implemented by the standard glOrtho function map zNear to -1 and zFar to +1, which are by default mapped to window depth coordinates in [0, 1] (changeable via glDepthRange in the fixed pipeline; I'm not sure whether that function is still supported). The depth test does indeed work in those window-space terms. The way around this is to just assume that zNear is furthest away from the projection plane, or to generate the matrix yourself, which you would need to do anyway if you want to get rid of the legacy pipeline.
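As an example of generating such a matrix yourself, here is a minimal glOrtho-equivalent sketch (my_ortho is just a made-up helper name; the array is column-major, as glLoadMatrixf expects):

    // Eye-space z = -near maps to clip z = -1, eye-space z = -far maps to clip z = +1,
    // which the default glDepthRange(0, 1) then maps to window depth [0, 1].
    void my_ortho(float l, float r, float b, float t, float n, float f) {
        float m[16] = { 0 };
        m[0]  =  2.0f / (r - l);
        m[5]  =  2.0f / (t - b);
        m[10] = -2.0f / (f - n);          // the sign flip behind the near/far mapping
        m[12] = -(r + l) / (r - l);
        m[13] = -(t + b) / (t - b);
        m[14] = -(f + n) / (f - n);
        m[15] =  1.0f;
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(m);
    }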

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow