Question

I'm trying to use luminance textures on my ATI graphics card.

The problem: I can't correctly retrieve data from my GPU. Whenever I try to read it back (using glReadPixels), all I get is an 'all-ones' array (1.0, 1.0, 1.0, ...).

You can test it with this code:

#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>

static int arraySize = 64;
static int textureSize = 8;
//static GLenum textureTarget = GL_TEXTURE_2D;
//static GLenum textureFormat = GL_RGBA;
//static GLenum textureInternalFormat = GL_RGBA_FLOAT32_ATI;
static GLenum textureTarget = GL_TEXTURE_RECTANGLE_ARB;
static GLenum textureFormat = GL_LUMINANCE;
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI;

int main(int argc, char** argv)
{
    // create test data and fill arbitrarily
    float* data = new float[arraySize];
    float* result = new float[arraySize];

    for (int i = 0; i < arraySize; i++)
    {
        data[i] = i + 1.0;
    }

    // set up glut to get valid GL context and
    // get extension entry points
    glutInit (&argc, argv);
    glutCreateWindow("TEST1");
    glewInit();

    // viewport transform for 1:1 pixel=texel=data mapping
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, textureSize, 0.0, textureSize);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, textureSize, textureSize);

    // create FBO and bind it (that is, use offscreen render target)
    GLuint fboId;
    glGenFramebuffersEXT(1, &fboId);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);

    // create texture
    GLuint textureId;
    glGenTextures (1, &textureId);
    glBindTexture(textureTarget, textureId);

    // set texture parameters
    glTexParameteri(textureTarget, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(textureTarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP);

    // define texture with floating point format
    glTexImage2D(textureTarget, 0, textureInternalFormat, textureSize, textureSize, 0, textureFormat, GL_FLOAT, 0);

    // attach texture
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textureTarget, textureId, 0);

    // transfer data to texture
    //glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
    //glRasterPos2i(0, 0);
    //glDrawPixels(textureSize, textureSize, textureFormat, GL_FLOAT, data);
    glBindTexture(textureTarget, textureId);
    glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, data);

    // and read back
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);

    // print out results
    printf("**********************\n");
    printf("Data before roundtrip:\n");
    printf("**********************\n");

    for (int i = 0; i < arraySize; i++)
    {
        printf("%f, ", data[i]);
    }
    printf("\n\n\n");

    printf("**********************\n");
    printf("Data after roundtrip:\n");
    printf("**********************\n");

    for (int i = 0; i < arraySize; i++)
    {
        printf("%f, ", result[i]);
    }
    printf("\n");

    // clean up
    delete[] data;
    delete[] result;

    glDeleteFramebuffersEXT (1, &fboId);
    glDeleteTextures (1, &textureId);

    system("pause");

    return 0;
}

I also read somewhere on the internet that ATI cards don't support luminance yet. Does anyone know if this is true?


Solution 2

Here's what I found out:

1) If you use GL_LUMINANCE as the texture format (with GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), glClampColor(..) (or glClampColorARB(..)) doesn't seem to have any effect at all. I was only able to see the values being clamped or not clamped when I set the texture format to GL_RGBA. I don't understand why this happens, since the only glClampColor(..) limitation I've heard of is that it works exclusively with floating-point buffers, which all of the chosen internal formats seem to be.

2) If you use GL_LUMINANCE (again, with GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), it looks like you must "correct" your output buffer by dividing each of its elements by 3 (see the sketch after this list). I guess this happens because glTexImage2D(..) with GL_LUMINANCE internally replicates each array component into the three color channels, and when you read GL_LUMINANCE values back with glReadPixels(..) it computes each value as the sum of the RGB components (thus, three times what you gave as input). But again, it still gives you clamped values.

3) Finally, if you use GL_RED as the texture format (instead of GL_LUMINANCE), you don't need to pack your input buffer and you get your output buffer back correctly. The values are not clamped and you don't need to call glClampColor(..) at all.
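For reference, here's a minimal sketch of the correction described in point 2, applied to the result buffer of the test program from the question. It assumes the driver really does sum the RGB channels on a GL_LUMINANCE read-back, as described above:

// hedged sketch: undo the assumed R+G+B summation performed by a
// GL_LUMINANCE glReadPixels read-back (see point 2 above)
for (int i = 0; i < arraySize; i++)
{
    result[i] /= 3.0f;
}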

So, I guess I'll stick with GL_RED, because in the end what I wanted was an easy way to send and collect floating-point values from my "kernels" without having to worry about offsetting array indexes or anything like this.
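As an illustration, here's a minimal sketch of that GL_RED variant, written as the lines that would change in the test program from the question. Keeping GL_LUMINANCE_FLOAT32_ATI as the internal format is an assumption based on the formats listed above:

// texture format changed from GL_LUMINANCE to GL_RED;
// internal format stays a floating-point one (assumption: GL_LUMINANCE_FLOAT32_ATI)
static GLenum textureFormat = GL_RED;
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI;

// upload and read-back then use GL_RED directly; no packing or /3 correction needed
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, GL_RED, GL_FLOAT, data);
glReadPixels(0, 0, textureSize, textureSize, GL_RED, GL_FLOAT, result);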

OTHER TIPS

This has nothing to do with luminance values; the problem is that you're reading back floating-point values.

In order to read floating-point data back properly via glReadPixels, you first need to set the color clamping mode. Since you're obviously not using OpenGL 3.0+, you should look at the ARB_color_buffer_float extension. That extension provides glClampColorARB, which works pretty much like the core 3.0 version.
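For example, a minimal sketch of what that could look like in the test program above, assuming GLEW exposes the extension (GLEW_ARB_color_buffer_float), is to disable clamping of read colors before the glReadPixels call:

// hedged sketch: disable clamping of colors returned by glReadPixels
// (requires ARB_color_buffer_float; GLEW_ARB_color_buffer_float is the GLEW flag)
if (GLEW_ARB_color_buffer_float)
    glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);

glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);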
