Question

I am currently working on an in-house GIS app. Background images are loaded in OpenGL by breaking the image down into what I guess are termed texels and mipmapped, after which a display list is built to texture-map each texel onto rectangles. This sounds pretty standard, but the issue is that currently, for images that do not divide neatly into 2^n pixel x 2^m pixel texels, the remainders are thrown away. Even if I were to capture the remainders and handle them in some way, I can't imagine that the answer is to continually test for texel subdivisions that eventually result in total capture of the image on neat boundaries? Or is it?

In some cases the images that I'm loading are geotiffs, and I need every single pixel to be included in my background. I've heard that glDrawPixels is slow. I know I could test this for myself, but I have a feeling that people in this space are using textures in OpenGL, not keeping track of pixel dumps.

I'm new to OpenGL, and I believe that the app is limiting itself to 1.1 calls.

Solution

The standard way of handling non-power-of-two images in OpenGL 1.1 is to store the image in a texture whose dimensions are rounded up to the nearest powers of two (for example, allocate the padded texture with glTexImage2D and copy the image into its lower-left corner with glTexSubImage2D), and then clamp the texture coordinates so that only the image region is sampled. Some pseudocode:

float sMax = (float)image.width / (float)texture.width;
float tMax = (float)image.height / (float)texture.height;

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0, 0);
glTexCoord2f(sMax, 0.0f); glVertex2f(1, 0);
glTexCoord2f(sMax, tMax); glVertex2f(1, 1);
glTexCoord2f(0.0f, tMax); glVertex2f(0, 1);
glEnd();

With a higher version of OpenGL, or if you're using extensions, you can use rectangle textures, which allow non-power-of-two sizes (with unnormalized texture coordinates); OpenGL 2.0 and later also support non-power-of-two sizes for ordinary 2D textures directly:

http://www.opengl.org/registry/specs/ARB/texture_rectangle.txt

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow