Question

I'm trying to pass an array of ints into the fragment shader by using a 1D texture. Although the code compiles and runs, when I look at the texture values in the shader, they are all zero!

This is the C++ code I have after following a number of tutorials:

GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 5); // use the 5th since first 4 may be taken
glBindTexture  (GL_TEXTURE_1D, texture);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RED_INTEGER, myVec.size(), 0, 
                               GL_RED_INTEGER, GL_INT, &myVec[0]);

GLint textureLoc =  glGetUniformLocation( program, "myTexture" );
glUniform1i(textureLoc, 5); 

And this how I try and access the texture in the shader:

uniform sampler1D myTexture; 
int dat = int(texture1D(myTexture, 0.0).r); // 0.0 is just an example 
if (dat == 0) { // always true!

I'm sure this is some trivial error on my part, but I just can't figure it out. I'm unfortunately constrained to using GLSL 1.20, so this syntax may seem outdated to some.

So why are the texture values in the shader always zero?

EDIT:

If I replace the ints with floats, I still have a problem:

std::vector <float> temp; 
// fill temp...
glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_R32F, temp.size(), 0, GL_R32F, GL_FLOAT, &temp[0]);
// ...
glUniform1f(textureLoc, 5);

This time, just reading from the sampler seems to mess up the other textures.


Solution

To begin with, GL_RED_INTEGER is wrong for the internal format; it is a pixel transfer format, not an internal format. Use a sized internal format such as GL_R32I (32-bit signed integer) instead; you could also use GL_R8I or GL_R16I depending on your actual storage requirements, and smaller types are generally better. Also, do not use a sampler1D for an integer texture; use isampler1D.

Since OpenGL ES does not support data type conversion during pixel transfer (e.g. in glTexImage2D (...)), the OpenGL ES documentation lists every valid combination of format, internal format and type in a table; those tables are also a convenient reference for the optimal combinations on desktop GL.
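For completeness, here is a sketch of what the corrected upload might look like on a context where integer textures actually exist (GL 3.0+, or 2.1 with the relevant extensions). It cannot run on the GL 2.1 context this question is stuck with, and it assumes a current context and a filled std::vector<GLint> named myVec:

```cpp
// Sketch only: requires a current GL 3.0+ context.
GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0 + 5);
glBindTexture(GL_TEXTURE_1D, texture);

// Integer textures must not be filtered; use GL_NEAREST for both filters.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Sized integer internal format, with the matching transfer format/type.
glTexImage1D(GL_TEXTURE_1D, 0, GL_R32I, (GLsizei)myVec.size(), 0,
             GL_RED_INTEGER, GL_INT, myVec.data());

// In the shader, sample with an integer sampler:
//   uniform isampler1D myTexture;
//   int dat = texelFetch(myTexture, idx, 0).r; // GLSL 1.30+
```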


You cannot use integer textures in OpenGL 2.1. If we are back to the same problem you were having yesterday with UBOs, and you cannot get a core profile context on OS X, then you are limited to GLSL 1.20 (OpenGL 2.1). The constants GL_R32I, GL_RED_INTEGER, etc. will be defined, but they will generate GL_INVALID_ENUM errors at runtime in OS X's OpenGL 2.1 implementation.

That is not to say you cannot pack an integer value into a standard fixed-point texture and get an integer value back out in a shader. If you use an 8-bit-per-component format (e.g. GL_R8, or GL_LUMINANCE8 if your 2.1 context lacks ARB_texture_rg), you can store values in the range 0 through 255. In your shader, after you do a texture lookup (use GL_NEAREST for the texture filter; filtering will really mess things up), multiply the floating-point result by 255.0 and cast to int. It is far from perfect, but we got along fine without integer textures for many years.
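The round trip described above can be checked without a GL context. Here is a minimal sketch in plain C++, where encode, sampleR, and decode are hypothetical stand-ins for the texture upload, the sampler's fixed-point normalization, and the shader arithmetic, respectively:

```cpp
#include <cassert>
#include <cstdint>

// What you store in the GL_R8 texture: one byte per value
// (values must already be in the range 0 through 255).
uint8_t encode(int value) { return static_cast<uint8_t>(value); }

// What the sampler hands the shader: the byte normalized to [0, 1].
// (GL_NEAREST guarantees an unblended texel is fetched.)
float sampleR(uint8_t texel) { return texel / 255.0f; }

// What the shader does to recover the int. Adding 0.5 before the
// truncating cast guards against the normalized value coming back
// fractionally low (e.g. 199.99998) due to float rounding.
int decode(float r) { return static_cast<int>(r * 255.0f + 0.5f); }
```

The same scheme extends to a 16-bit format by scaling with 65535.0 instead of 255.0.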

Here is a modification to your shader that does exactly that:

#version 120

uniform sampler1D myTexture;
uniform float textureWidth; // set this to the texture's width from C++

int dat = int(texture1D(myTexture, (idx + 0.5) / textureWidth).r * 255.0);
if (dat == 0) { // not always true!

This assumes an 8-bit internal format such as GL_R8; for a 16-bit format, multiply by 65535.0 instead. Note that GLSL has no C-style casts; conversions use constructor syntax like int(...) and float(...).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow