Question

I have been reading about out of memory conditions on Linux, and the following paragraph from the man pages got me thinking:

By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. [...]

Considering that the operator new implementation will end up calling malloc at some point, are there any guarantees that new will actually throw on Linux? If there aren't, how does one handle this apparently undetectable error situation?

Solution

It depends; you can configure the kernel's overcommit settings using vm.overcommit_memory.

Herb Sutter discussed a few years ago how this behavior is actually nonconforming to the C++ standard:

"On some operating systems, including specifically Linux, memory allocation always succeeds. Full stop. How can allocation always succeed, even when the requested memory really isn't available? The reason is that the allocation itself merely records a request for the memory; under the covers, the (physical or virtual) memory is not actually committed to the requesting process, with real backing store, until the memory is actually used.

"Note that, if new uses the operating system's facilities directly, then new will always succeed but any later innocent code like buf[100] = 'c'; can throw or fail or halt. From a Standard C++ point of view, both effects are nonconforming, because the C++ standard requires that if new can't commit enough memory it must fail (this doesn't), and that code like buf[100] = 'c' shouldn't throw an exception or otherwise fail (this might)."

OTHER TIPS

You can't handle it in your software, pure and simple.

Your application receives a perfectly valid-looking pointer. Once you try to access it, the access generates a page fault, the kernel tries to allocate a physical page to back it, and if it can't ... boom.

But as you can see, all of this happens inside the kernel; your application cannot observe it. If it's a critical system, you can disable overcommit altogether on the system.

malloc can still return NULL, though. The reason is that there is a difference between the memory the system has available (RAM + swap) and the amount of address space your process has.

For example, if you ask malloc for 3 GB on a standard 32-bit x86 Linux, it will surely return NULL, because the kernel keeps part of the 4 GB address space for itself and user space cannot satisfy a single request that large.

Forgive me if I'm wrong, but wouldn't zeroing out the allocated memory be enough to guarantee that you actually own every byte you requested? Note that just writing to the last byte wouldn't do: pages are committed one at a time, so you have to touch at least one byte in every page.

If that were true, you could write a malloc wrapper that touches every page and returns NULL when a page can't be committed. The catch on Linux is that a failed commit doesn't surface as an error your code can catch: the OOM killer may simply terminate the process, so there is no point at which the wrapper could return NULL.

Yes, there is one guarantee: new will eventually throw. Regardless of overcommit, the amount of address space is limited. So if you keep allocating memory, sooner or later you will run out of address space, and new will be forced to throw std::bad_alloc.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow