Question

After having used PyOpenGL happily for some time, I'm now seriously stuck. I am working on a Python package that allows me to use GLSL shaders and OpenCL programs for image processing, using textures as the standardized way to get my data in and out of the GLSL shaders and OpenCL programs.

Everything works, except that I cannot manage to copy a texture into a pbo (pixel buffer object). I'm using pbo's to get my texture data in and out of OpenCL, and that works nicely and fast in PyOpenCL: I can copy my OpenCL output from its pbo to a texture and display it, and I can also load data from the CPU into a pbo. But I am hopelessly stuck trying to fill my pbo with texture data that is already on the GPU, which is what I need to do to load the images produced by my GLSL shaders into OpenCL for further processing.

I've read about two ways to do this:

- variant 1 binds the pbo, binds the texture, and uses glGetTexImage()
- variant 2 attaches the texture to a frame buffer object, binds the fbo and the pbo, and uses glReadPixels()

I also read that the PyOpenGL versions of both glReadPixels() and glGetTexImage() have trouble with the NULL pointer one should pass when a pbo is bound, so for that reason I am using the OpenGL.raw.GL variants.

But in both cases I get an 'invalid operation' error, and I really do not see what I am doing wrong. Below are two versions of the _load_texture() method of my pixelbuffer Python class; I hope I didn't strip them down too far...

variant 1:

    def _load_texture(self, texture):
        glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, self.id)
        glEnable(texture.target)
        glActiveTexture(GL_TEXTURE0_ARB)
        glBindTexture(texture.target, texture.id)
        OpenGL.raw.GL.glGetTexImage(texture.target, 0, texture.gl_imageformat,
                                    texture.gl_dtype, ctypes.c_void_p(0))
        glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0)
        glDisable(texture.target)

variant 2:

    def _load_texture(self, texture):
        fbo = FrameBufferObject.from_textures([texture])
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               texture.target, texture.id, 0)
        glReadBuffer(GL_COLOR_ATTACHMENT0)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo.id)
        glBindBuffer(GL_PIXEL_PACK_BUFFER, self.id)
        OpenGL.raw.GL.glReadPixels(0, 0, self.size[0], self.size[1],
                                   texture.gl_imageformat, texture.gl_dtype,  
                                   ctypes.c_void_p(0))
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_RECTANGLE_ARB, 0, 0)
        glBindFramebuffer(GL_FRAMEBUFFER, 0)

EDIT (adding some information about the error and initialization of my pbo):

The error I am getting for variant 1 is:

    OpenGL.error.GLError: GLError(
        err = 1282,
        description = 'invalid operation',
        baseOperation = glGetTexImage,
        cArguments = (
            GL_TEXTURE_RECTANGLE_ARB,
            0,
            GL_RGBA,
            GL_UNSIGNED_BYTE,
            c_void_p(None),
        )
    )

And I'm initializing my pbo like this:

    self.usage = usage
    if isinstance(size, tuple):
        size = size[0] * size[1] * self.imageformat.planecount
    bytesize = self.imageformat.get_bytesize_per_plane() * size
    glBindBuffer(self.arraytype, self.id)
    glBufferData(self.arraytype, bytesize, None, self.usage)
    glBindBuffer(self.arraytype, 0)

The 'self.arraytype' is GL_ARRAY_BUFFER; for self.usage I have tried all the possibilities just in case, but GL_STREAM_READ seemed the most logical for my kind of use. The size I am typically using is 1024 by 1024, with 4 planes of 1 byte each, since the data is unsigned bytes. This all works fine when transferring pixel data from the host.
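Spelled out, the size calculation above for my typical case comes to 4 MiB (a minimal sketch of the same arithmetic, with the imageformat values hard-coded here):

```python
width, height = 1024, 1024
planecount = 4        # RGBA
bytes_per_plane = 1   # one unsigned byte per channel

# the tuple branch in the initializer: size becomes pixel count * planes
size = width * height * planecount
# then bytesize = bytes-per-plane * size
bytesize = bytes_per_plane * size

print(bytesize)  # 4194304, i.e. 4 MiB
```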

Also, I am on Kubuntu 11.10, using an NVIDIA GeForce GTX 580 with 3 GB of memory on the GPU, with the proprietary driver, version 295.33.

What am I missing?

Solution

I found a solution myself, without really understanding why it makes such a huge difference.

The code I had (for both variants) was basically correct, but it needs the call to glBufferData in there for it to work. I already made that identical call when initializing my pbo in my original code, but my guess is that enough was going on between that initialization and my attempt to load the texture for the pbo memory somehow to become deallocated in the meantime.

Now I have only moved that call closer to my glGetTexImage call, and it works without changing anything else.

Strange. I'm not sure whether this is a bug or a feature, or whether it is related to PyOpenGL, to the NVIDIA driver, or to something else. It certainly isn't documented anywhere easy to find, if it is expected behaviour.

The variant 1 code below works and is mighty fast too; variant 2 works fine as well when treated in the same way, but at about half the speed.

    def _load_texture(self, texture):
        bytesize = (self.size[0] * self.size[1] *
                    self.imageformat.planecount *
                    self.imageformat.get_bytesize_per_plane())
        glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, self.id)
        # re-specifying the buffer's data store right here, just before the
        # read-back, is what makes the glGetTexImage call below work
        glBufferData(GL_PIXEL_PACK_BUFFER_ARB,
                     bytesize,
                     None, self.usage)
        glEnable(texture.target)
        glActiveTexture(GL_TEXTURE0_ARB)
        glBindTexture(texture.target, texture.id)
        OpenGL.raw.GL.glGetTexImage(texture.target, 0, texture.gl_imageformat,
                                    texture.gl_dtype, ctypes.c_void_p(0))
        glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0)
        glDisable(texture.target)
Licensed under: CC-BY-SA with attribution