Question

I am using OpenGL ES 2.0 to render 16-bit greyscale data on Android. (I'm handling the 16-bit to 8-bit dynamic range scaling in the shader; the 16-bit input is mandatory because it is the incoming format of the data.) The resolution is 640x512.

At the moment I have this working by pushing pixels two at a time into a 320x512 32-bit RGBA texture, i.e.:

texture2D(thData, vTextureCoord)[0] is pixel i lower byte,

texture2D(thData, vTextureCoord)[1] is pixel i upper byte,

texture2D(thData, vTextureCoord)[2] is pixel i+1 lower byte,

texture2D(thData, vTextureCoord)[3] is pixel i+1 upper byte.
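
For reference, the CPU-side packing that fills sBuffer is essentially the following (a sketch; the method name and the pixels array are placeholders for my actual code):

private static java.nio.ByteBuffer packTo320x512Rgba(short[] pixels) {
    // Each RGBA texel (4 bytes) carries two 16-bit pixels, lower byte first.
    java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocateDirect(640 * 512 * 2);
    for (int i = 0; i < 640 * 512; i++) {
        int p = pixels[i] & 0xFFFF;
        buf.put((byte) (p & 0xFF));        // lower byte -> channel 0 (even i) or 2 (odd i)
        buf.put((byte) ((p >> 8) & 0xFF)); // upper byte -> channel 1 (even i) or 3 (odd i)
    }
    buf.position(0);
    return buf;
}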

I am able to reconstruct the data from this, but I have to recover the original resolution by rendering to a 640x512 buffer and using gl_FragCoord.x in a conditional statement to determine whether to draw the pixel from channels 0,1 or 2,3. The code looks like this:

 private final String mFragmentShader =
    "precision highp float;\n" + 
    "varying vec2 vTextureCoord;\n" +
    "uniform sampler2D thData;\n" +
    "void main() {\n" +

    //This works out whether the pixel is in an odd or even column:
    "int odd = 0;\n" +
    "if (fract(gl_FragCoord.x/2.) >= 0.5) odd = 2;\n"+

    //This chooses the channels based on the column number
    "float data =((texture2D(thData, vTextureCoord)[odd]) + (texture2D(thData, vTextureCoord)[odd+1]*256.))*256.;\n" +

This would be fine in principle, but for some reason I seem to be getting some columns swapped; my guess is that rounding errors are causing an incorrect result from the conditional statement.

What I'd like to do is avoid the downsampling by using a 16-bit, full-resolution texture in the first place. This seems really simple; I have just changed the glTexImage2D call from:

GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 320, 512, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, sBuffer);

to:

GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA, 640, 512, 0, GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, sBuffer);

However, when I do this I appear to be getting strange data in the texture. texture2D(thData, vTextureCoord)[0] appears to be correct, but texture2D(thData, vTextureCoord)[1] has the same data as element 0, and the upper byte of the pixel is nowhere to be seen.

Am I missing anything here?

Or can anyone suggest an alternative approach?

Solution

GL_LUMINANCE_ALPHA textures replicate the first (luminance) component across the x, y and z components, so elements 0, 1 and 2 of the sampled value all contain the same byte. The second component ends up in w (element 3), as the "ALPHA" part of the enum name implies. See the definition of the GL_LUMINANCE_ALPHA format at: http://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexImage2D.xml
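
In other words, with the 640x512 GL_LUMINANCE_ALPHA upload the shader should read the lower byte from element 0 and the upper byte from element 3, along these lines (a sketch that keeps the same scaling as in the question and assumes the first byte of each pixel is the lower one):

precision highp float;
varying vec2 vTextureCoord;
uniform sampler2D thData;
void main() {
    vec4 t = texture2D(thData, vTextureCoord);
    // t[0], t[1] and t[2] all hold the luminance (lower) byte; t[3] holds the alpha (upper) byte.
    float data = (t[0] + t[3] * 256.0) * 256.0;
    // ... dynamic range scaling and gl_FragColor write-out as before ...
}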

OTHER TIPS

Did you set all of the pixel unpack parameters? See glPixelStorei. Most likely you need to set GL_UNPACK_ALIGNMENT to 1; the default alignment of 4 can corrupt uploads whose rows are not a multiple of 4 bytes.
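
For example, set it before the upload (the second line is the call from the question):

GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA, 640, 512, 0, GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, sBuffer);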

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow