Question

I am working on a 2D game. Imagine an XY plane with you as a character. As your character walks, the rest of the scene comes into view.

Imagine that the XY plane is quite large and there are other characters outside of your current view.

Here is my question: with OpenGL, if those objects outside of the current view aren't rendered, do they still eat up processing time?

Also, what are some approaches to avoid rendering parts of the scene that aren't in view? If I have a cube that is 1000 units away from my current position, I don't want that object rendered. How could I have OpenGL not render it?

I guess the easiest approach is to calculate the position and then not draw that cube/object if it is too far away.


Solution

If you've set up your scene graph correctly, objects outside your field of view will be culled early in the display pipeline. This requires a box check in your code to verify that an object is invisible, so there will be some processing overhead (but not much).

If you organise your objects into a sensible hierarchy, you can cull large sections of the scene with only one box check.
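As a concrete sketch of that box check (the Rect, Object, and drawScene names here are made up for illustration; none of them are part of OpenGL), a 2D game can simply test each object's bounding box against the camera's view rectangle before drawing:

```cpp
#include <vector>

// Hypothetical types for illustration; none of this is part of OpenGL.
struct Rect {
    float minX, minY, maxX, maxY;
};

struct Object {
    Rect bounds;                            // world-space bounding box
    void draw() const { /* glDrawArrays(...) etc. */ }
};

// Standard AABB overlap test: the boxes miss if they are separated
// on either axis.
bool overlaps(const Rect& a, const Rect& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY;
}

// Only submit geometry whose bounding box intersects the camera's
// visible rectangle.
void drawScene(const std::vector<Object>& objects, const Rect& viewRect) {
    for (const Object& obj : objects) {
        if (overlaps(obj.bounds, viewRect))
            obj.draw();
    }
}
```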

OTHER TIPS

The OpenGL FAQ on "Clipping, Culling and Visibility Testing" says this:

OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.

Go ahead and read the rest of that link; it's all relevant.

Typically your application must perform these optimisations: OpenGL is literally just the rendering part and doesn't do object management or anything like that. If you pass in data for something invisible, OpenGL still has to transform the relevant coordinates into view space before it can determine that the geometry is entirely off-screen or beyond one of your clip planes.

There are several ways of culling invisible objects from the pipeline. Checking whether an object is behind the camera is probably the easiest and cheapest check to perform, since on average you can reject half your data set with a simple calculation per object. It's not much harder to perform the same sort of test against the actual view frustum to reject everything that isn't visible at all.
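As a rough sketch of that behind-the-camera rejection (Vec3, behindCamera, and the helper are hypothetical names, and the camera's forward vector is assumed to be normalised), one dot product per object is enough:

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if the object's bounding sphere lies entirely behind the plane
// through the camera position, perpendicular to the view direction.
// Such an object can be skipped before any OpenGL call is made.
bool behindCamera(const Vec3& camPos, const Vec3& camForward,
                  const Vec3& objCentre, float objRadius) {
    const Vec3 toObj = { objCentre.x - camPos.x,
                         objCentre.y - camPos.y,
                         objCentre.z - camPos.z };
    // Signed distance of the object's centre along the view direction;
    // if even the nearest point of the sphere is behind, cull it.
    return dot(toObj, camForward) < -objRadius;
}
```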

Obviously in a complex game you won't want to do this for every tiny object, so it's typical to group them: either hierarchically (e.g. you wouldn't render a gun if you've already determined that you're not rendering the character that holds it), spatially (e.g. dividing the world up into a grid/quadtree/octree and rejecting any object that you know is within a zone you have already determined is currently invisible), or more commonly a combination of both.
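For instance, a minimal hierarchical sketch (reusing the hypothetical Rect and overlaps helpers from the earlier sketch) might look like this; culling a node culls its entire subtree, so the gun is never even tested once its owner has been rejected:

```cpp
#include <vector>

// Hierarchical culling sketch. A node's box must enclose everything
// below it, so one failed check rejects the whole subtree.
struct SceneNode {
    Rect bounds;                         // encloses this node's children too
    std::vector<SceneNode*> children;
    void draw() const { /* draw this node's own geometry */ }
};

void drawVisible(const SceneNode& node, const Rect& viewRect) {
    if (!overlaps(node.bounds, viewRect))
        return;                          // entire subtree skipped here
    node.draw();
    for (const SceneNode* child : node.children)
        drawVisible(*child, viewRect);
}
```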

"the only winning move is not to play"

Every glVertex call (and the like) is going to be a performance hit regardless of whether the result ultimately appears on your screen. The only way to get around that is to not draw (i.e. cull) objects which won't ever be rendered anyway.

The most common method is to have a viewing frustum tied to your camera. Couple that with an octree or quadtree (depending on whether your game is 3D or 2D) so you don't need to check every single game object against the frustum.
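A sketch of that combination (again assuming the hypothetical Rect, Object, and overlaps helpers from above; a real quadtree would also handle insertion and splitting) might look like:

```cpp
#include <vector>

// Quadtree culling sketch. The traversal never descends into a
// quadrant that misses the view, so whole regions of the world are
// rejected with a handful of box checks.
struct QuadNode {
    Rect bounds;                          // region this node covers
    QuadNode* quadrants[4] = {};          // all null for leaf nodes
    std::vector<const Object*> objects;   // objects stored at this node
};

void drawVisible(const QuadNode& node, const Rect& viewRect) {
    if (!overlaps(node.bounds, viewRect))
        return;                           // prune this entire region
    for (const Object* obj : node.objects)
        if (overlaps(obj->bounds, viewRect))
            obj->draw();
    for (const QuadNode* q : node.quadrants)
        if (q)
            drawVisible(*q, viewRect);
}
```

The number of box checks then grows roughly with the depth of the tree rather than with the total object count.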

The underlying driver may do some culling behind the scenes, but you can't depend on that since it's not part of the OpenGL standard. Maybe your computer's driver does it, but maybe someone else's (who might run your game) doesn't. It's best for you to do your own culling.
