Question

Hey guys, I am rendering some old geometry from an old game. The original client had an algorithm that let it determine which areas were nearby, but I don't have that ability, so I am looking into culling the unnecessary polygons. Currently I am rendering every single polygon in the whole zone, regardless of whether I can see it or whether it is even in visual range. Obviously this is completely inefficient. My question:

What type of culling should I look into using? I know I can cull polygons outside the frustum, and that will help alleviate some of the load, but could I also choose not to render polygons beyond a certain distance from the camera? What is this called? I am also using fog in some of the areas, and the same question applies there: can I come up with a way to cull everything that is behind the fog, i.e. the area I cannot see?

Thank you. Here is a screenshot of this work in progress; some people may recognize it :) Also, ignore the ugly colors on the leaves. I have not accounted for alpha masking yet.

Image: http://i.stack.imgur.com/duc2I.png

Solution

There are two different things to consider. Do you just want the scene to look right, i.e. hidden surface removal? Then simple depth testing will do the job; the overhead is that you process geometry that doesn't make it onto the screen at all. However, if the data comes from a (very) old game, it's quite likely that the full map with all its assets has fewer polygons than a single screenful of a modern game. In that case you won't run into any performance problems.

If you really do run into performance problems, you'll need to find a balance between the time spent determining what is (not) visible and the time spent actually rendering. Ten years ago it was still crucial to be almost pixel-perfect, to save as much rasterizing time as possible. Modern GPUs have so much spare power that a coarse selection of what to include in rendering suffices.

These calculations are, however, completely outside the scope of OpenGL or any other 3D rasterizing API (e.g. Direct3D): their task is just drawing triangles to the screen using sophisticated rasterization methods. There is no object management and no higher-level functionality, so it's up to you to implement this.

The typical approach is a spatial subdivision structure. The most popular ones are kd-trees, octrees and BSP trees. BSP trees are spatially very efficient, but heavier in computation. Personally, I prefer a hybrid/combination of kd-tree and octree, since those are easy to modify to follow dynamic changes in the scene. BSP trees are much heavier to update (usually requiring a full recomputation).
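To make the idea concrete, here is a minimal point-octree sketch in C++. The structure names, the leaf capacity, and the minimum cell size are all illustrative choices, not anything from the original game data:

```cpp
#include <array>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };

// Minimal octree sketch: each node covers a cubic cell and splits into
// eight children once it holds more than a few points.
struct OctreeNode {
    Vec3 center;                 // center of this node's cubic cell
    float halfSize;              // half the edge length of the cell
    std::vector<Vec3> points;    // points stored while this is a leaf
    std::array<std::unique_ptr<OctreeNode>, 8> children;
    bool isLeaf = true;

    OctreeNode(Vec3 c, float h) : center(c), halfSize(h) {}

    // Which of the 8 octants does p fall into? Bit 0: x, bit 1: y, bit 2: z.
    int octantOf(const Vec3& p) const {
        return (p.x >= center.x ? 1 : 0)
             | (p.y >= center.y ? 2 : 0)
             | (p.z >= center.z ? 4 : 0);
    }

    void insert(const Vec3& p, int maxLeafPoints = 4) {
        if (isLeaf) {
            points.push_back(p);
            // Split when the leaf overflows, unless the cell is already tiny.
            if ((int)points.size() <= maxLeafPoints || halfSize < 1e-3f)
                return;
            isLeaf = false;
            std::vector<Vec3> old = std::move(points);
            points.clear();
            for (const Vec3& q : old) insert(q, maxLeafPoints);
            return;
        }
        int i = octantOf(p);
        if (!children[i]) {
            float h = halfSize * 0.5f;
            Vec3 c{ center.x + ((i & 1) ? h : -h),
                    center.y + ((i & 2) ? h : -h),
                    center.z + ((i & 4) ? h : -h) };
            children[i] = std::make_unique<OctreeNode>(c, h);
        }
        children[i]->insert(p, maxLeafPoints);
    }

    // Count all points stored in this subtree.
    int count() const {
        if (isLeaf) return (int)points.size();
        int n = 0;
        for (const auto& c : children) if (c) n += c->count();
        return n;
    }
};
```

A real scene tree would store triangles or whole objects (by bounding box) instead of points, but the subdivision logic stays the same.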

Given such a spatial structure, it's very easy to determine whether a point lies in a specific region of interest. It is also very simple to select nodes in the tree by geometric constraints such as planes. This makes a coarse frustum culling easy to implement: use the frustum's clipping planes to select all the tree nodes lying within them. To make the GPU's life easier, you might then want to sort the nodes near to far; again the tree structure helps you there, as you can sort recursively down the tree, resulting in a nearly optimal O(n log n) complexity.
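The node-versus-plane selection above boils down to a box/frustum test. A common trick is the "positive vertex" test: a box is entirely outside a plane exactly when its corner farthest along the plane's normal is outside. A hedged sketch, with an illustrative plane convention (points where dot(n, p) + d >= 0 count as inside):

```cpp
#include <array>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // inside: dot(n, p) + d >= 0
struct AABB  { Vec3 min, max; };

// Positive-vertex test: pick the corner farthest along the plane normal;
// if even that corner is outside, the whole box is outside.
bool aabbInsidePlane(const AABB& b, const Plane& p) {
    Vec3 v{ p.n.x >= 0 ? b.max.x : b.min.x,
            p.n.y >= 0 ? b.max.y : b.min.y,
            p.n.z >= 0 ? b.max.z : b.min.z };
    return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d >= 0;
}

// Coarse frustum cull: keep a node unless it is fully outside some plane.
bool aabbIntersectsFrustum(const AABB& b, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum)
        if (!aabbInsidePlane(b, p)) return false;
    return true;
}
```

During traversal you would run this test on each tree node's bounding box and skip the whole subtree when it fails, so most of the map is rejected in a handful of tests.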

If you still need to improve rendering performance, you could use the spatial divisions defined by the tree to (invisibly) render test geometry in an occlusion query before recursing into the subtree bounded by the tested volume.
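In desktop OpenGL that flow looks roughly like the following sketch. `drawBoundingBox` and `renderSubtree` are hypothetical helpers standing in for your own tree code; also note that reading `GL_QUERY_RESULT` immediately stalls the pipeline, so real implementations usually check the result a frame later or use `GL_ANY_SAMPLES_PASSED`:

```cpp
// Occlusion-query sketch (OpenGL 1.5+): draw the node's bounding box with
// writes disabled, and only recurse if any fragments passed the depth test.
GLuint query;
glGenQueries(1, &query);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
glDepthMask(GL_FALSE);                               // no depth writes

glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBox(node);        // hypothetical: renders the node's AABB
glEndQuery(GL_SAMPLES_PASSED);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples); // blocks until ready
if (samples > 0)
    renderSubtree(node);      // hypothetical: draw the real geometry inside
glDeleteQueries(1, &query);
```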

OTHER TIPS

I know I can cull polygons outside the frustum, and that will help alleviate some of the load, but could I also choose not to render polygons beyond a certain distance from the camera? What is this called?

This is already handled by the frustum itself. The far plane sets a camera-distance limit on the objects to be rendered.

Have a look at glFrustum.
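If you want a per-object draw distance independent of the projection (this is usually just called distance culling), it's a single squared-distance comparison; the function and names below are illustrative:

```cpp
struct Vec3 { float x, y, z; };

// Distance culling: skip objects farther than maxDist from the camera.
// Comparing squared distances avoids the sqrt.
bool withinDrawDistance(const Vec3& cam, const Vec3& obj, float maxDist) {
    float dx = obj.x - cam.x, dy = obj.y - cam.y, dz = obj.z - cam.z;
    return dx * dx + dy * dy + dz * dz <= maxDist * maxDist;
}
```

For the fog case, setting `maxDist` to the distance where the fog becomes fully opaque gives the same effect as culling everything hidden behind the fog.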

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow