Question

Scene graphs let you reason about position hierarchically: if one node doesn't need to be rendered, the children of that node also don't need to be rendered.

But it seems that it might not be the best approach if the objects in the scene are constantly changing position, so you always have to update your scene graph.

I was wondering: is there a completely different approach to reducing scene complexity?


Solution

if one node doesn't need to be rendered, the children of that node also don't need to be rendered.

This is not true. Think of a parent node that is only slightly out of view while one of its children is clearly within view.
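A minimal sketch of that counterexample (all coordinates are made up for illustration): a scene-graph node's own bounds say nothing about where its children's geometry lies, so culling the subtree based on the parent alone is unsafe.

```python
# Sketch: culling a scene-graph subtree by the parent's own bounds is wrong.
# Boxes are ((min_x, min_y), (max_x, max_y)); coordinates are illustrative.

def aabb_overlaps(a, b):
    """Axis-aligned overlap test between two 2D boxes."""
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

view = ((0, 0), (100, 100))                # the visible region
parent_geometry = ((-20, -20), (-5, -5))   # parent's own mesh: just out of view
child_geometry = ((40, 40), (60, 60))      # child's mesh: clearly in view

print(aabb_overlaps(view, parent_geometry))  # False: parent itself invisible
print(aabb_overlaps(view, child_geometry))   # True: child must still be drawn
```

Skipping the subtree because the parent test returned `False` would wrongly hide the child.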

Scenegraphs are not a tool for visible/invisible geometry determination. Scenegraphs manage the geometrical, transformational hierarchies between objects: Galaxies → Stars → Planets → Moons, and so on.
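What a scene graph actually manages can be sketched in a few lines. This is an assumption-laden toy (names like `Node` and `world_positions` are illustrative, not from any particular engine), with transforms simplified to 2D translations:

```python
# Toy scene graph: each node stores a position relative to its parent,
# and world positions fall out of walking the hierarchy.

class Node:
    def __init__(self, name, local_offset=(0.0, 0.0)):
        self.name = name
        self.local_offset = local_offset  # position relative to the parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def world_positions(node, parent_pos=(0.0, 0.0), out=None):
    """Walk the hierarchy, composing each local offset with the parent's."""
    if out is None:
        out = {}
    pos = (parent_pos[0] + node.local_offset[0],
           parent_pos[1] + node.local_offset[1])
    out[node.name] = pos
    for child in node.children:
        world_positions(child, pos, out)
    return out

galaxy = Node("galaxy")
star = galaxy.add(Node("star", (100.0, 0.0)))
planet = star.add(Node("planet", (10.0, 0.0)))
planet.add(Node("moon", (1.0, 0.0)))

print(world_positions(galaxy)["moon"])  # (111.0, 0.0)
```

Move the star, and the planet and moon follow for free; that transform propagation, not visibility, is the scene graph's job.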

What you're actually thinking of are not scene graphs but Bounding Volume Hierarchies (BVHs), which are a completely different concept. Yes, you can mix BVHs with scenegraph data, and it's usually done, but they're used differently.
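The crucial difference can be shown in a short sketch (class and function names are mine, not a standard API): a BVH node's bounds enclose all geometry below it by construction, so a failed visibility test on a node safely culls its entire subtree, which is exactly the property a scene-graph node lacks.

```python
# Toy BVH: every node's bounds are the union of its children's bounds,
# so "node invisible" really does imply "whole subtree invisible".

def union(a, b):
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ((min(ax0, bx0), min(ay0, by0)), (max(ax1, bx1), max(ay1, by1)))

def overlaps(a, b):
    (ax0, ay0), (ax1, ay1) = a
    (bx0, by0), (bx1, by1) = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

class BVHNode:
    def __init__(self, bounds, children=(), payload=None):
        self.children = list(children)
        self.payload = payload
        self.bounds = bounds
        # Invariant: parent bounds enclose all children.
        for c in self.children:
            self.bounds = union(self.bounds, c.bounds)

def collect_visible(node, view, out):
    if not overlaps(node.bounds, view):
        return  # culling the whole subtree is safe here, by the invariant
    if node.payload is not None:
        out.append(node.payload)
    for c in node.children:
        collect_visible(c, view, out)

near = BVHNode(((0, 0), (10, 10)), payload="near object")
far = BVHNode(((200, 200), (210, 210)), payload="far object")
root = BVHNode(near.bounds, children=(near, far))

visible = []
collect_visible(root, ((0, 0), (50, 50)), visible)
print(visible)  # ['near object']
```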

But it seems that it might not be the best approach if the objects in the scene are constantly changing position, so you always have to update your scene graph (read: BVH).

This is indeed the case. Because of that, BVH structures are a topic of ongoing research, mostly focused on adaptive BVH modification, where you don't have to rebuild the whole BVH if only a subset changes. However, BVHs are by nature search-tree structures, and for searches to be efficient a search tree has to stay balanced, which can be a costly operation in itself.

So there's a tradeoff to be made between the cost of rebuilding the whole tree and the cost of rebalancing it.
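The cheap end of that tradeoff is a refit pass. This is a simplified sketch under my own assumptions (a tiny `Node` class, 2D boxes), not a production scheme: when objects move, recompute bounds bottom-up without touching the tree's topology. It is O(n) and trivially incremental, but repeated refits let the bounds grow loose, which is where the rebuild/rebalance cost eventually bites.

```python
# Refit: after objects move, restore the bounds invariant bottom-up
# without rebuilding the tree.

class Node:
    def __init__(self, object_bounds=None, children=()):
        self.object_bounds = object_bounds  # only set on leaves
        self.children = list(children)
        self.bounds = None

def refit(node):
    """Recompute node bounds bottom-up; topology is left untouched."""
    if not node.children:               # leaf: bounds track its object
        node.bounds = node.object_bounds
        return node.bounds
    child_bounds = [refit(c) for c in node.children]
    x0 = min(b[0][0] for b in child_bounds)
    y0 = min(b[0][1] for b in child_bounds)
    x1 = max(b[1][0] for b in child_bounds)
    y1 = max(b[1][1] for b in child_bounds)
    node.bounds = ((x0, y0), (x1, y1))
    return node.bounds

a = Node(object_bounds=((0, 0), (1, 1)))
b = Node(object_bounds=((5, 5), (6, 6)))
root = Node(children=(a, b))
refit(root)
print(root.bounds)                       # ((0, 0), (6, 6))

a.object_bounds = ((10, 10), (11, 11))   # an object moved
refit(root)
print(root.bounds)                       # ((5, 5), (11, 11))
```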

On the bright side, the perfect hidden-surface discrimination that was once so badly needed in the days of software rasterizers is no longer required.

We now live in times in which GPUs have significant reserves for overdraw, and a "worse is better" approach often yields good results. A very effective approach is letting GPU and CPU collaborate in traversing a simple, loosely connected BVH based on Axis-Aligned Bounding Boxes (AABBs), in which tree nodes may overlap. The visibility of a subvolume is tested by sending the AABB's boundary faces to the GPU for a dry-run render that produces no pixels but collects statistics on how much, if any, of the bounding volume would actually be drawn. This is a very popular method now, and it yields very good results.
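To make the dry-run idea concrete, here is a CPU-side conceptual model of it. Real implementations use hardware occlusion queries (e.g. OpenGL's `GL_SAMPLES_PASSED`); everything below is a simplified stand-in of my own: test the box's screen-space footprint against a depth buffer and count samples that would pass, drawing nothing.

```python
# Conceptual occlusion "query": count how many depth samples the AABB's
# footprint would pass. Zero passed samples means the subtree behind this
# box can be skipped entirely.

def occlusion_query(depth_buffer, rect, box_depth):
    """Return the number of samples in rect at which box_depth is nearer."""
    (x0, y0), (x1, y1) = rect
    passed = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            if box_depth < depth_buffer[y][x]:  # nearer than stored depth
                passed += 1
    return passed

W, H = 8, 8
depth = [[1.0] * W for _ in range(H)]     # empty scene: far plane everywhere
for y in range(H):                        # a wall at depth 0.2 covers the left half
    for x in range(W // 2):
        depth[y][x] = 0.2

# Box footprint behind the wall: every sample fails, so cull its contents.
print(occlusion_query(depth, ((0, 0), (4, 8)), 0.5))   # 0
# Footprint over the open right half: samples pass, so descend and draw.
print(occlusion_query(depth, ((4, 0), (8, 8)), 0.5))   # 32
```

The GPU does this test massively in parallel while the CPU walks the tree, which is why even a loose, overlapping BVH is good enough in practice.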

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow