Problem

I'm creating a class that takes an OpenGL scenegraph and uses QGLFramebufferObject to render the result. To support (virtually) infinite sizes, I use tiling to extract many small images that can be combined into one big image after rendering all tiles.

I do tiling by setting up a viewport (glViewport) for the entire image and then using glScissor to "cut out" tile after tile. This works fine for resolutions up to GL_MAX_VIEWPORT_DIMS, but results in empty tiles beyond that limit.
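Roughly, my per-tile setup looks like the sketch below (the variable names are illustrative, not from my actual code):

```cpp
// Per-tile setup as described above: the viewport spans the whole
// virtual image, shifted so the current tile lands at the FBO origin;
// the scissor box clips rendering to the tile.
glViewport(-tileX, -tileY, fullW, fullH);  // fails once fullW/fullH exceed GL_MAX_VIEWPORT_DIMS
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, tileW, tileH);
// ... traverse/render the scenegraph, then read back the tile ...
```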

How should I approach this problem? Do I need to alter the camera, or are there any neat tricks for this? I'm using Coin/OpenInventor, so tips specific to those frameworks are very welcome too.

Solution

Changing the camera isn't as hard as you might think, and it's the only solution I can see, apart from modifying vertex shaders.

By scaling and translating the projection matrix along the x and y axes, you can easily get any subregion of the normal camera's view.

For a given tile with min and max corners, where the full viewport spans (-1, -1) to (1, 1), translate by -(max + min) / 2 and scale by 2 / (max - min): the tile's center (max + min) / 2 moves to the origin, and its half-extent (max - min) / 2 is stretched to fill the full clip volume.
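A minimal sketch of this, using the legacy fixed-function matrix stack (the function name and tile-bounds parameters are illustrative; the zoom is built first so that it ends up applied after the original projection):

```cpp
#include <GL/gl.h>

// Pre-multiplies the camera's projection by a "zoom" matrix mapping the
// tile region [minX,maxX] x [minY,maxY] -- given in the full view's
// normalized coordinates, which span (-1,-1)..(1,1) -- onto the whole
// clip volume.
void applyTileProjection(const GLdouble fullProjection[16],
                         GLdouble minX, GLdouble minY,
                         GLdouble maxX, GLdouble maxY)
{
    const GLdouble cx = (maxX + minX) / 2.0;  // tile center
    const GLdouble cy = (maxY + minY) / 2.0;
    const GLdouble hx = (maxX - minX) / 2.0;  // tile half-extent
    const GLdouble hy = (maxY - minY) / 2.0;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glScaled(1.0 / hx, 1.0 / hy, 1.0);  // stretch half-extent to 1
    glTranslated(-cx, -cy, 0.0);        // move tile center to origin
    glMultMatrixd(fullProjection);      // then the normal projection
    glMatrixMode(GL_MODELVIEW);
}
```

For an N-by-N tile grid, tile (i, j) would then use min = -1 + 2i/N and max = -1 + 2(i+1)/N along each axis, and every tile is rendered with a plain glViewport covering the whole FBO.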

Other tips

You could try scaling the entire world down, which indirectly lets the maximum viewport cover more of the scene. In other words, scale the image AND the viewport down by the same factor and you get the same visual result at a lower resolution.
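A sketch of that fallback, assuming fullW/fullH are the requested image size (the names are hypothetical):

```cpp
// Fall back to a uniformly scaled-down image when the requested size
// exceeds the driver's viewport limit (fullW/fullH are illustrative).
GLint maxDims[2];
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxDims);

double s = 1.0;
if (fullW * s > maxDims[0]) s = (double)maxDims[0] / fullW;
if (fullH * s > maxDims[1]) s = (double)maxDims[1] / fullH;

const int outW = (int)(fullW * s);
const int outH = (int)(fullH * s);
glViewport(0, 0, outW, outH);  // same view, proportionally less detail
```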

License: CC-BY-SA with attribution. Not affiliated with StackOverflow.