Question

Mesa 3D claims to support 32-bit floating-point color channels via osmesa32. The trouble is that the values I read back are quantized to 8 bits! Has anyone else noticed this? Below is the short program I'm using for testing. You will see that I draw a plane (filling the whole view) with a specific floating-point color and then read back the color of the first pixel:

#include <stdio.h>
#include <stdlib.h>
#include "GL/osmesa.h"
#include "GL/glut.h"

#define WIDTH 100
#define HEIGHT 100

void draw()
{
    GLint r, g, b, a;
    glGetIntegerv(GL_RED_BITS, &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS, &b);
    glGetIntegerv(GL_ALPHA_BITS, &a);
    printf("channel sizes: %d %d %d %d\n", r, g, b, a);

    glEnable(GL_DEPTH_TEST);

    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glTranslatef(0.0, 0.0, -1.0);

    // draw a plane
    glBegin(GL_QUADS);
    glColor3f(0.5f, 0.56789f, 0.568f);
    glVertex3f(-1, -1, 0);
    glVertex3f(-1, 1, 0);
    glVertex3f(1, 1, 0);
    glVertex3f(1, -1, 0);
    glEnd();

    glFinish();
}

int main( int argc, char *argv[] )
{
    GLfloat *buffer;

    /* Create an RGB-mode context */
#if OSMESA_MAJOR_VERSION * 100 + OSMESA_MINOR_VERSION >= 305
    /* specify Z, stencil, accum sizes */
    OSMesaContext ctx = OSMesaCreateContextExt( GL_RGB, 16, 0, 0, NULL );
#else
    OSMesaContext ctx = OSMesaCreateContext( GL_RGB, NULL );
#endif
    if (!ctx) {
        printf("OSMesaCreateContext failed!\n");
        return 0;
    }

    /* Allocate the image buffer */
    buffer = (GLfloat *) malloc( WIDTH * HEIGHT * 3 * sizeof(GLfloat));
    if (!buffer) {
        printf("Alloc image buffer failed!\n");
        return 0;
    }

    /* Bind the buffer to the context and make it current */
    if (!OSMesaMakeCurrent( ctx, buffer, GL_FLOAT, WIDTH, HEIGHT )) {
        printf("OSMesaMakeCurrent failed!\n");
        return 0;
    }

    draw();

    printf("RED: %f\n", buffer[0]);
    printf("GREEN: %f\n", buffer[1]);
    printf("BLUE: %f\n", buffer[2]);

    /* free the image buffer */
    free( buffer );

    /* destroy the context */
    OSMesaDestroyContext( ctx );

    return 0;
}

In the drawing code, the line:

glColor3f(0.5f, 0.56789f, 0.568f);

should give me full-precision floating-point color values. When I read the colors back, I get the following output:

channel sizes: 32 32 32 32
RED: 0.501961
GREEN: 0.568627
BLUE: 0.568627

And you'll notice that 0.501961 = 128/255 and 0.568627 = 145/255, i.e. the values have been quantized to 8 bits.
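
For reference, here is a tiny stand-alone check (my own sketch, not part of the test program above) showing that rounding those colors to the nearest 8-bit step and back reproduces exactly the numbers I read:

#include <stdio.h>

/* Round-trip the test colors through an 8-bit channel and print the result. */
int main(void)
{
    float in[3] = { 0.5f, 0.56789f, 0.568f };
    int i;
    for (i = 0; i < 3; i++) {
        int q = (int)(in[i] * 255.0f + 0.5f);         /* 128, 145, 145 */
        printf("%f -> %d/255 -> %f\n", in[i], q, q / 255.0f);
    }
    return 0;
}

This prints 0.501961 and 0.568627, matching the output above.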

I built Mesa using the following configuration on my Mac:

./configure --with-driver=osmesa --with-osmesa-bits=32 --disable-gallium --disable-egl

Solution

This is a build-configuration issue. In s_span.c you can see a conversion to GLubyte based on the value of CHAN_TYPE (defined in mtypes.h).

It all comes down to whether CHAN_BITS is defined as 32 in config.h.
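
Roughly speaking, CHAN_BITS selects the framebuffer channel type, something like this (a simplified sketch of the idea, not the verbatim Mesa source):

/* Sketch: how CHAN_BITS picks the channel type used for span storage. */
#if CHAN_BITS == 8
typedef GLubyte GLchan;    /* colors stored as 0..255, which is where the quantization comes from */
#elif CHAN_BITS == 16
typedef GLushort GLchan;
#elif CHAN_BITS == 32
typedef GLfloat GLchan;    /* full float precision kept all the way to the buffer */
#else
#error "illegal number of color channel bits"
#endif

So unless the build really ends up with CHAN_BITS == 32, every span gets squeezed through an 8-bit type before it reaches your float buffer.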

I see in your post that you say you're configuring for 32 bits, but we are probably working with different builds: I'm getting OSMesa going on Windows, and it looks like you might not be.


I am using 7.5.1, which seems to be the last Mesa release that ships a Visual Studio .sln.

Setting the channel bits to 32 causes OSMesa to fail for me. Please let me know if you find anything out.

Thanks!

OTHER TIPS

Try using a shader and vertex attributes instead of immediate mode: there's no guarantee that glColor3f doesn't quantize what it receives to 8 bits. I'm not sure such a guarantee exists even on "real" OpenGL. As far as I can tell, the OpenGL 4.1 compatibility spec doesn't say anything about preserving color precision, but it does contain some interesting passages such as "As a result of limited precision, some converted values will not be represented exactly." (2.13: Fixed-Function Vertex Lighting and Coloring).
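
For what it's worth, here is a minimal sketch of that idea in the context of the draw() routine above: the color is fed in as a generic vertex attribute and written straight out of a fragment shader, bypassing the fixed-function color path entirely. This assumes the OSMesa context exposes the GL 2.0 shader entry points; the attribute name colorAttrib and the omitted error checking are mine.

/* Trivial pass-through shaders: the attribute value goes straight to the fragment color. */
static const char *vs_src =
    "attribute vec3 colorAttrib;\n"
    "varying vec3 col;\n"
    "void main() {\n"
    "    col = colorAttrib;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";
static const char *fs_src =
    "varying vec3 col;\n"
    "void main() { gl_FragColor = vec4(col, 1.0); }\n";

static GLuint compile(GLenum type, const char *src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    return s;
}

/* ...inside draw(), before issuing the quad: */
GLuint prog = glCreateProgram();
glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_src));
glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_src));
glBindAttribLocation(prog, 1, "colorAttrib");  /* avoid location 0, which aliases the vertex position */
glLinkProgram(prog);
glUseProgram(prog);

glVertexAttrib3f(1, 0.5f, 0.56789f, 0.568f);   /* instead of glColor3f */
glBegin(GL_QUADS);
glVertex3f(-1, -1, 0);
glVertex3f(-1, 1, 0);
glVertex3f(1, 1, 0);
glVertex3f(1, -1, 0);
glEnd();

If the values read back from the buffer are still 145/255 with this, the precision is being lost in the framebuffer itself rather than in glColor3f.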

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow