Question

I'm working on a Windows C++ application for point clouds. We use the PCL library along with Qt and OpenSceneGraph. The computer has 4 GB of RAM.

If we load a lot of points (for example, 40 point clouds with around 800 million points in total), the system goes crazy.

The app becomes almost unresponsive (moving the mouse over it takes ages and the cursor changes to a spinning circle), and in the Task Manager's Performance tab I see this:

Memory (1 in the picture): goes up to 3.97 GB, almost the total of the system.

Free (2 in the picture): 0

[Windows Task Manager screenshot]

I have checked these posts: here and here, and with the MEMORYSTATUSEX version I got the memory info.

The idea is: before loading more clouds, check the available memory. If the "weight" of the cloud we're about to load is bigger than the available memory, don't load it; that way the app won't freeze, and the user has the chance to remove older clouds to free some memory. It's worth noting that no exceptions are thrown; the worst case I saw was Windows killing the app itself when memory ran out.
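The check described above can be sketched with `GlobalMemoryStatusEx`. This is only a sketch under assumptions: `estimatedCloudBytes` is a hypothetical caller-supplied size estimate, and the 10% safety margin is an arbitrary choice to leave headroom for the OS and other processes.

```cpp
#include <windows.h>
#include <cstdint>

// Sketch: refuse to load a cloud when it would not fit in the
// currently available physical memory (plus a safety margin).
bool canLoadCloud(std::uint64_t estimatedCloudBytes)
{
    MEMORYSTATUSEX status{};
    status.dwLength = sizeof(status);
    if (!GlobalMemoryStatusEx(&status))
        return false; // query failed; be conservative and refuse

    // Keep ~10% of total RAM free (assumed margin) so the OS and
    // other processes are not starved of physical memory.
    const std::uint64_t margin = status.ullTotalPhys / 10;
    return estimatedCloudBytes + margin <= status.ullAvailPhys;
}
```

Note that `ullAvailPhys` is a snapshot: another process can consume that memory between the check and your allocation, which is exactly the race the accepted answer's locking approach addresses.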

Now, is this a good idea? Is there a canonical way to deal with this?

I would be glad to hear your thoughts on this matter.


Solution

You are looking at this from a different direction than the usual approach to similar problems.

Normally, one would allocate the space they need and then attempt to lock it in physical memory (mlock() in POSIX, VirtualLock() in the WinAPI). The reasoning is that even if the system has enough available physical memory at the moment, some other process could spawn the next moment and push part of your resident set into swap.

This will require you to use a custom allocator as well as ensure that your process has permission to lock down the required number of pages.

Read here for a start on this: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
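The locking approach described above might look like the following sketch. The working-set sizes passed to `SetProcessWorkingSetSize` are assumptions (the default working-set limit is small, so the lock would otherwise fail for large buffers); tune them to your actual needs.

```cpp
#include <windows.h>
#include <cstddef>

// Sketch: allocate a buffer and pin it in physical RAM so the OS
// cannot push it out to the page file.
void* allocateLocked(std::size_t bytes)
{
    // Grow the process working set so VirtualLock can succeed.
    // The extra 1 MiB / 8 MiB headroom values are illustrative.
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             bytes + (1u << 20),   // minimum working set
                             bytes + (8u << 20));  // maximum working set

    void* p = VirtualAlloc(nullptr, bytes, MEM_COMMIT | MEM_RESERVE,
                           PAGE_READWRITE);
    if (p && !VirtualLock(p, bytes)) {
        VirtualFree(p, 0, MEM_RELEASE); // lock failed; give the memory back
        return nullptr;
    }
    return p; // caller must VirtualUnlock and VirtualFree when done
}
```

A failed `VirtualLock` here doubles as the "not enough memory, let the user remove older clouds" signal the question asks for, without waiting for the app to freeze.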

Other tips

You are also likely running into memory issues with your graphics card even once the points are loaded. You should probably monitor that as well. Once your loaded point clouds exceed your dedicated graphics card memory (which they almost certainly do in this case), rendering slows to a crawl.

800 million is also an immense number of points. With a minimum of 3 floats per point (assuming no colorization), you are talking about 9.6 GB of points, so you are swapping like crazy.
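The 9.6 GB figure follows from simple arithmetic, which can be checked with a small helper:

```cpp
#include <cstdint>

// Back-of-the-envelope cost of n points at 3 raw floats (x, y, z)
// each, with no color and no padding: 12 bytes per point.
std::uint64_t rawPointBytes(std::uint64_t n)
{
    return n * 3 * sizeof(float);
}

// rawPointBytes(800'000'000) == 9'600'000'000, i.e. 9.6 GB.
// In practice PCL's pcl::PointXYZ is padded to 16 bytes for SSE
// alignment, so the real in-memory cost is higher still.
```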

I generally start voxelizing to reduce memory usage once I get beyond 30-40 million points.
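Since the question already uses PCL, the voxelization mentioned above can be done with PCL's `VoxelGrid` filter, which keeps one representative point per voxel. A minimal sketch; the 1 cm leaf size is an assumption and should be tuned to the scale of your data:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Sketch: downsample a cloud before storing/rendering it.
pcl::PointCloud<pcl::PointXYZ>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& input)
{
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(input);
    grid.setLeafSize(0.01f, 0.01f, 0.01f); // one point kept per 1 cm voxel

    pcl::PointCloud<pcl::PointXYZ>::Ptr output(
        new pcl::PointCloud<pcl::PointXYZ>);
    grid.filter(*output);
    return output;
}
```

Applied at load time, this bounds both RAM and GPU memory per cloud at the cost of spatial resolution.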

This is more complicated than you might imagine. The available memory shown in the system display is physical memory. The amount of memory available to your application is virtual memory.

The physical memory is shared by all processes on the computer, so anything else running at the same time competes with your application for it.

-=-=-=--=-=

I suspect that the problem you are seeing is processing, not memory. Using half the memory on a 4 GB system should be no big deal.

If you are doing lengthy calculations, do you give the system a chance to process accumulated events?

That is what I suspect the real problem is.
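One way to test this suspicion in a Qt app is to pump the event loop during the long-running load. A sketch: `loadChunk()` is a hypothetical function standing in for one slice of your cloud-loading work, returning `false` when finished.

```cpp
#include <QCoreApplication>

bool loadChunk(); // hypothetical: loads one slice, false when done

// Sketch: keep the UI responsive while loading in the GUI thread.
void loadAllChunks()
{
    while (loadChunk()) {
        // Flush accumulated input/paint events so the cursor does not
        // turn into the spinning circle described in the question.
        QCoreApplication::processEvents();
    }
}
```

The more robust fix is to move the loading to a worker thread (e.g. `QThread`) so the GUI thread never blocks at all; `processEvents()` is mainly useful as a quick diagnostic.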

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow