Question

I have a large dynamically allocated array (C++, MSVC110), and I am initializing it like this:

int* data = nullptr;

try {
    size_t arrayLength = 1 << 28;   // 2^28 ints = 1 GiB
    data = new int[arrayLength];
    for (size_t i = 0; i < arrayLength; ++i) {
        data[i] = rand();
    }
}
catch (std::bad_alloc&) { /* Report error. */ }

Everything was fine until I tried to allocate more than the system's actual RAM, say 10 GB. I was expecting to catch a bad_alloc exception, but instead the system (Windows 7) started swapping like crazy.

Then I watched the process in Task Manager and noticed something interesting: in debug mode the allocation was instant, but in release mode it was gradual.

Debug mode:

Debug mode allocation graph

Release mode:

Release mode allocation graph

What is causing this? Could it have any negative impact on performance? Have I done something wrong? Is the OS causing it, or the C++ allocator?

I would actually prefer to get an exception when there is not enough physical memory rather than end up in an endless swapping loop. Is there any way to achieve that in C++?

I know that one solution might be to turn off swapping in Windows, but that would only solve the problem on my own machine.


Solution

I think the memory allocator does extra bookkeeping in debug mode (block chaining and fill patterns) to allow better detection of memory-handling errors. It writes into every allocated block, thus forcing the system to commit all of the allocated pages almost immediately.

In release mode, it is your own code that fills the block linearly, committing one page at a time as the loop touches it.
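You can observe that gradual behaviour in isolation with a minimal sketch like the one below (Windows-specific, and using VirtualAlloc directly rather than new, which is my own simplification): the whole range is committed up front, but the physical memory usage that Task Manager graphs only grows as the loop writes into each 4 KB page.

#include <windows.h>

int main() {
    const SIZE_T size = SIZE_T(1) << 30;  // 1 GiB of address space

    // Reserve and commit the whole range in one call.
    char* block = static_cast<char*>(
        VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
    if (block == nullptr)
        return 1;

    // Physical pages are only assigned when first written to, so the memory
    // graph climbs gradually during this loop (release-mode behaviour).
    // A debug-heap fill pattern would touch every page right away instead.
    for (SIZE_T offset = 0; offset < size; offset += 4096)
        block[offset] = 1;

    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}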

As for limiting the amount of memory, there are system calls that let you know about the available resources; GlobalMemoryStatusEx, for instance, in a Windows environment.
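A minimal sketch of such a query on Windows (the MB formatting is just for readability):

#include <windows.h>
#include <cstdio>

int main() {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);          // must be set before the call
    if (GlobalMemoryStatusEx(&status)) {
        std::printf("Available physical RAM: %llu MB\n",
                    status.ullAvailPhys / (1024ull * 1024ull));
        std::printf("Memory load: %lu%%\n", status.dwMemoryLoad);
    }
    return 0;
}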

Having a system call fail whenever a memory swap would be required would make no sense, since the amount of available physical memory changes constantly due to circumstances a given program cannot control (such as other applications being started).

It is possible to make some memory blocks non-swappable (i.e. locked in RAM), but that kind of usage is usually limited to system layers such as drivers.
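For completeness, Windows does expose this to user mode through VirtualLock; the sketch below shows the call, but the amount you can lock is bounded by the process working-set quota, so it is not a practical way to pin gigabytes of data.

#include <windows.h>

// Sketch only: pins an already-allocated buffer into physical RAM so it
// cannot be paged out. Fails if the request exceeds the working-set quota.
bool pin_buffer(void* buffer, SIZE_T bytes) {
    return VirtualLock(buffer, bytes) != 0;
}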

It is up to you to detect available memory and enforce an allocation limit.

Note that it is a dangerous game: you are usually not running alone on the computer, and there is no telling whether another application will be launched later and consume more memory.

If swapping is a killer for your application, you should consider building in a safety margin (i.e. try to leave something like 500 MB or 1 GB of RAM available to the system).
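Putting the pieces together, a guarded allocation could look something like the sketch below. The helper name allocate_if_ram_available and the 1 GB default margin are my own illustrative choices, not a standard API: it throws std::bad_alloc itself when the request would eat into the safety margin, instead of letting the OS start to swap.

#include <windows.h>
#include <cstddef>
#include <new>

// Illustrative helper: refuse the allocation when it would leave less than
// `margin` bytes of physical RAM for the rest of the system.
int* allocate_if_ram_available(std::size_t count,
                               unsigned long long margin = 1ull << 30) {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    const unsigned long long needed =
        static_cast<unsigned long long>(count) * sizeof(int);
    if (!GlobalMemoryStatusEx(&status) ||
        status.ullAvailPhys < needed + margin) {
        throw std::bad_alloc();
    }
    return new int[count];  // may still throw bad_alloc on its own
}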
