Question

I’m trying to make a minimalist OpenGL program to run on both my Intel chipset (Mesa) and NVIDIA card through Bumblebee (Optimus).

My source code (using FreeGLUT):

#include <GL/freeglut.h>

void display(void);
void resized(int w, int h);

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);  /* single-buffered RGBA window */
    glutInitContextVersion(2, 1);
    glutInitContextProfile(GLUT_CORE_PROFILE);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Hello, triangle!");

    glutReshapeFunc(resized);
    glutDisplayFunc(display);

    glClearColor(0.3, 0.3, 0.3, 1.0);

    glutMainLoop();
    return 0;
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    /* draw a white triangle in immediate mode */
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
        glVertex3f(0.0, 0.75, 0.0);
        glVertex3f(-0.75, -0.75, 0.0);
        glVertex3f(0.75, -0.75, 0.0);
    glEnd();

    glFlush();
}

void resized(int w, int h)
{
    glViewport(0, 0, w, h);
    glutPostRedisplay();
}

When I launch the program directly (./a.out) on the Intel chipset, everything works. I have no such luck with primusrun ./a.out, which displays a transparent window.

It is not really transparent: the image that was behind the window stays, even when I move the window around.

What's interesting is that when I switch to a double color buffer (GLUT_DOUBLE instead of GLUT_SINGLE, and glutSwapBuffers() instead of glFlush()), it works on both Intel and primusrun.
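For reference, here is a sketch of that change; only the display-mode flag and the end of the display callback differ from the listing above:

glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);  /* was GLUT_SINGLE */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
        glVertex3f(0.0, 0.75, 0.0);
        glVertex3f(-0.75, -0.75, 0.0);
        glVertex3f(0.75, -0.75, 0.0);
    glEnd();

    glutSwapBuffers();  /* was glFlush() */
}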

Here's my glxinfo: http://pastebin.com/9DADif6X
and my primusrun glxinfo: http://pastebin.com/YCHJuWAA

Am I doing it wrong or is it a Bumblebee-related bug?


Solution

The window is probably not really transparent; it most likely just shows whatever was beneath it when it appeared. Try moving it around and watch whether it "drags" the picture along.

When a compositor is running, single-buffered windows are tricky, because there is no cue telling the compositor when the program has finished rendering. A double-buffered window performing a buffer swap gives the compositor exactly that cue.

In addition to that, to finish a single-buffered drawing you call glFinish, not glFlush; glFinish also acts as a cue that drawing has been, well, finished.
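Applied to the single-buffered display callback above, that would be a one-line change:

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
        glVertex3f(0.0, 0.75, 0.0);
        glVertex3f(-0.75, -0.75, 0.0);
        glVertex3f(0.75, -0.75, 0.0);
    glEnd();

    glFinish();  /* blocks until all previous GL commands have completed; glFlush() only submits them */
}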

Note that there's little use for single-buffered drawing these days. The only argument against double buffering used to be the lack of available graphics memory; in times where GPUs have hundreds of megabytes of RAM, that is no longer a weighty argument.

Licensed under: CC-BY-SA with attribution