Question

I'm trying to create OpenGL bindings for Node. Because of the sheer size of the OpenGL API, doing this manually is impractical, so I turned to Khronos' OpenGL registry.

The files provided are easy enough to parse, but one important piece seems to be missing: how to compute the size of the non-trivial parameter buffers.

Here's an example of a function definition that needs such an output buffer. Notice the COMPSIZE() expression:

GetTextureImageEXT(texture, target, level, format, type, pixels)
    return      void
    param       texture     Texture in value
    param       target      TextureTarget in value
    param       level       CheckedInt32 in value
    param       format      PixelFormat in value
    param       type        PixelType in value
    param       pixels      Void out array [COMPSIZE(target/level/format/type)]
    category    EXT_direct_state_access
    dlflags     notlistable
    glxflags    ignore ### client-handcode server-handcode
    extension   soft WINSOFT
    glfflags    capture-execute capture-handcode decode-handcode pixel-pack

This example illustrates the problem well. It seems clear that the "pixels" parameter needs an output buffer whose size depends on the target, level, format and type parameters. But how or where can I find the actual formula to compute that size?
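To make it concrete, here is roughly the kind of computation I expect COMPSIZE(target/level/format/type) to stand for, sketched in C for the plain glGetTexImage path with a 2D target (the helper names are mine, it assumes tightly packed pixels, and it only covers a few common format/type combinations):

#include <GL/gl.h>
#include <stddef.h>

/* Sketch only: assumes a current GL context, a texture bound to `target`,
 * tightly packed pixels (GL_PACK_ALIGNMENT of 1, no GL_PACK_ROW_LENGTH),
 * and a handful of common format/type combinations. */

static size_t components_per_pixel(GLenum format)
{
    switch (format) {
    case GL_RGBA:            return 4;
    case GL_RGB:             return 3;
    case GL_LUMINANCE_ALPHA: return 2;
    case GL_LUMINANCE:
    case GL_ALPHA:
    case GL_DEPTH_COMPONENT: return 1;
    default:                 return 0;   /* unhandled format */
    }
}

static size_t bytes_per_component(GLenum type)
{
    switch (type) {
    case GL_UNSIGNED_BYTE:
    case GL_BYTE:            return 1;
    case GL_UNSIGNED_SHORT:
    case GL_SHORT:           return 2;
    case GL_UNSIGNED_INT:
    case GL_INT:
    case GL_FLOAT:           return 4;
    default:                 return 0;   /* unhandled type */
    }
}

static size_t tex_image_size(GLenum target, GLint level, GLenum format, GLenum type)
{
    GLint width = 0, height = 0;
    glGetTexLevelParameteriv(target, level, GL_TEXTURE_WIDTH,  &width);
    glGetTexLevelParameteriv(target, level, GL_TEXTURE_HEIGHT, &height);
    return (size_t)width * (size_t)height
         * components_per_pixel(format) * bytes_per_component(type);
}

Even this toy version needs a live context to query the level's dimensions at run time, so whatever the real formula is, it cannot be fully expressed by the specification files alone.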

The only piece of related information I could find online was a C source file called compsize.c that apparently belongs to the Apple implementation of OpenGL.

Can anyone help me find the hard data on this?

Solution

I'm going to answer my own question here.

My conclusion is that what I was trying to achieve simply doesn't fit the way OpenGL is meant to work: the computation I wanted to automate depends on too many factors to be derived from the specification files alone.

Just to clarify: what I intend to do is provide OpenGL bindings for Node (the command-line JavaScript interpreter based on Google's V8 engine). Since some OpenGL (or extension) calls return significant amounts of data, and/or data with a non-trivial memory layout, I was hoping to help programmers by allocating the required output buffers through auto-generated code based on Khronos' parseable specification files, the same files that are already used by projects such as GLEW to generate their extended C bindings.

Based on the answers I received (thank you all!), that idea was naive, and the intent not as helpful as I thought: simply having a buffer of the right size might avoid access violations, but it does not help a programmer make use of the information he obtained. In the end, he still needs to know the exact memory layout anyway, so allocating the buffer won't be an issue for him (well, in theory at least).

In light of all this, computing output buffer sizes, as well as handling their contents, is better left to the next-higher software layer. And since, to my knowledge, no one uses the whole of the OpenGL API in a single piece of software, there is no real need for one library to handle every possible output buffer allocation.
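For illustration, from the caller's side this is all it takes, assuming the caller created a 256x256 GL_RGBA / GL_UNSIGNED_BYTE texture earlier and therefore already knows every factor that enters into the size (the dimensions here are made up for the example):

#include <GL/gl.h>
#include <stdlib.h>

/* Assumes a current GL context and a texture bound to GL_TEXTURE_2D that the
 * caller created itself, so width, height, format and type are already known. */
void dump_texture_level0(void)
{
    const GLsizei width = 256, height = 256;            /* known since glTexImage2D */
    const size_t  size  = (size_t)width * height * 4;   /* GL_RGBA, GL_UNSIGNED_BYTE */
    void *pixels = malloc(size);
    if (!pixels)
        return;
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* ... hand the data to whatever needs it ... */
    free(pixels);
}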

Thanks to all who responded!

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow