To begin with, GL_RED_INTEGER is wrong for the internal format. I would use GL_R32I (32-bit signed integer) instead; you could also use GL_R8I or GL_R16I depending on your actual storage requirements - smaller types are generally better. Also, do not use a sampler1D for an integer texture, use isampler1D.
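For reference, the matching allocation call might look like this (a sketch; `width` and `data` are placeholder names, and this requires a context that actually supports integer textures):

```c
/* The internal format (GL_R32I) says how GL stores the texels; the format
   parameter (GL_RED_INTEGER) and type (GL_INT) describe the data you pass in. */
glTexImage1D (GL_TEXTURE_1D, 0, GL_R32I, width, 0,
              GL_RED_INTEGER, GL_INT, data);
```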
Since OpenGL ES does not support data type conversion during pixel transfer (e.g. glTexImage2D (...)), you can usually find the optimal combination of format, internal format and type in a table if you look through the OpenGL ES docs.
You cannot use integer textures in OpenGL 2.1 - assuming we are back to the same problem you were having yesterday with UBOs. If you cannot get a core profile context on OS X, you are limited to GLSL 1.2 (OpenGL 2.1). The constants for GL_R32I, GL_RED_INTEGER, etc. will be defined, but they will generate GL_INVALID_ENUM errors at runtime in OS X's OpenGL 2.1 implementation.
That is not to say you cannot pack an integer value into a standard fixed-point texture and get an integer value back out in a shader. If you use an 8-bit per component format (e.g. GL_R8) you can store values in the range 0 - 255. In your shader, after you do a texture lookup (I would use GL_NEAREST for the texture filter; filtering will really mess things up), you can multiply the floating-point result by 255.0 and convert to int. It is far from perfect, but we got along fine without integer textures for many years.
Here is a modification to your shader that does exactly that:

#version 120

uniform sampler1D myTexture;

// GLSL has no C-style casts; use the int (...) constructor. Round before
// truncating, or a stored 1 can come back as 0.9999... and truncate to 0.
int dat = int (texture1D (myTexture, (idx + 0.5) / float (textureWidth)).r * 255.0 + 0.5);

if (dat == 0) { // an exact comparison now that dat is an integer

This assumes GL_R8 for the internal format; use 65535.0 instead of 255.0 for GL_R16.
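On the CPU side, the matching upload for the GL_R8 case might look like this (a sketch; `tex`, `textureWidth` and `data` are placeholder names):

```c
GLuint tex;
glGenTextures (1, &tex);
glBindTexture (GL_TEXTURE_1D, tex);

/* GL_NEAREST, because filtering would blend the packed integer values. */
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Note: GL_R8 needs ARB_texture_rg in a 2.1 context; the legacy alternative
   is GL_LUMINANCE8 (with GL_LUMINANCE as the format argument). */
glTexImage1D (GL_TEXTURE_1D, 0, GL_R8, textureWidth, 0,
              GL_RED, GL_UNSIGNED_BYTE, data);
```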