Question

I'm trying to output some data from a compute shader to a texture, but imageStore() seems to do nothing. Here's the shader:

#version 430

layout(rgba32f) uniform image2D image;

layout (local_size_x = 1, local_size_y = 1) in;
void main() {
    imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f));
}

and the application code is here:

GLuint tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);

glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glUseProgram(program->GetName());
glUniform1i(program->GetUniformLocation("image"), 0);

glDispatchCompute(WIDTH, HEIGHT, 1);

Then a full-screen quad is rendered with that texture, but it only shows some random old data from video memory. Any idea what could be wrong?

EDIT:

This is how I display the texture:

// This comes right after the previous block of code
glUseProgram(drawProgram->GetName());

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(drawProgram->GetUniformLocation("sampler"), 0);

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);

glfwSwapBuffers();

and the drawProgram consists of:

#version 430
#extension GL_ARB_explicit_attrib_location : require

layout(location = 0) in vec2 position;
out vec2 uvCoord;

void main() {
    gl_Position = vec4(position.x, position.y, 0.0f, 1.0f);
    uvCoord = position;
}

and:

#version 430

in vec2 uvCoord;
out vec4 color;

uniform sampler2D sampler;

void main() {
    vec2 uv = (uvCoord + vec2(1.0f)) / 2.0f;
    uv.y = 1.0f - uv.y;

    color = texture(sampler, uv);
    //color = vec4(uv.x, uv.y, 0.0f, 1.0f);
}

The last (commented-out) line in the fragment shader produces this output: [render output screenshot]

The vertex array object (vao) has one buffer with six 2D vertices (a setup sketch follows the list):

-1.0, -1.0
 1.0, -1.0
 1.0,  1.0
 1.0,  1.0
-1.0,  1.0
-1.0, -1.0
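
A minimal sketch of the VAO setup those vertices imply, assuming a hypothetical buffer name vbo (the original only mentions vao):

static const GLfloat quad[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
     1.0f,  1.0f,
    -1.0f,  1.0f,
    -1.0f, -1.0f,
};

GLuint vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
// Attribute 0 matches "layout(location = 0) in vec2 position" in the vertex shader
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);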


Solution

This is how I display the texture:

That's not good enough. I don't see a call to glMemoryBarrier, so there's no guarantee that your code actually works.

Remember: writes to images via Image Load/Store are not memory coherent. They require explicit user synchronization before they become visible. If you want to use an image you have stored to as a texture later, there must be an explicit glMemoryBarrier call after the command that writes to it, but before the rendering command that samples from it as a texture.
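
As a concrete sketch using the names from the question, the barrier sits between the dispatch and the draw; GL_TEXTURE_FETCH_BARRIER_BIT is the appropriate bit here because the image will next be read through a sampler:

glDispatchCompute(WIDTH, HEIGHT, 1);

// Make the compute shader's image stores visible to later texture fetches.
// The barrier bit describes how the data will be read next, not how it was written.
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

glUseProgram(drawProgram->GetName());
glBindTexture(GL_TEXTURE_2D, tex);
glDrawArrays(GL_TRIANGLES, 0, 6);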

Why that is a problem, I don't know

Because desktop OpenGL is not OpenGL ES.

The last three parameters of glTexImage2D only describe the arrangement of the pixel data you're giving OpenGL. They change nothing about how OpenGL stores the data. In ES, they do, but that's only because ES doesn't do format conversions.

In desktop OpenGL, it is perfectly legal to upload floating-point data to a normalized integer texture; OpenGL is expected to convert the data as best it can. ES doesn't do conversions, so it has to change the internal format (the third parameter) to match the data.

Desktop GL does not. If you want a specific image format, you ask for it. Desktop GL gives you what you ask for, and only what you ask for.

Always use sized internal formats.
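
A sketch of the corrected allocation under that advice, reusing the question's variables; the filter state is set explicitly because only mip level 0 is allocated, and the default GL_NEAREST_MIPMAP_LINEAR min filter would otherwise leave the texture incomplete for sampling:

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Sized internal format: GL_RGBA32F matches the rgba32f image format in the shader
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);
// Only level 0 exists, so do not use mipmapped filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);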

Other tips

GL_RGBA is not a sized internal format, so you cannot know which actual format you will get; most often, OpenGL resolves it to GL_RGBA8.

In your case, the GL_FLOAT parameter only describes the pixel data you upload into the texture, not how the texture is stored.

Read table 2 in the glTexImage2D reference documentation to see what you can pass as an internal texture format.

Okay, I found the solution. The problem lies here:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);

This line doesn't specify the size of the internal format (GL_RGBA is unsized). When I supplied GL_RGBA32F, it started working. Why that is a problem, I don't know (hopefully somebody will be able to explain).
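
For reference, the fixed line as described:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0);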

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow