Question

I'm developing an application in C++ for an embedded system with limited memory (a Tegra 2). I handle NULL results of new and new[] throughout the code; allocation failures do occur occasionally, but the application is able to recover from them.

The problem is that the system kills the process with SIGKILL when memory runs out completely. Can I somehow make new just return NULL instead of the process being killed?


Solution

I am not sure which OS you are using, but you should check whether it supports opportunistic memory allocation (overcommit) the way Linux does.

If it is enabled, the following may happen:

  1. Your new or malloc receives a valid address from the kernel, even if there is not enough memory, because ...
  2. ... the kernel does not actually allocate the memory until the moment of first access.
  3. If all of the "overcommitted" memory is eventually touched, the operating system has no choice but to kill one of the involved processes. (By then it is too late to tell the program that there is not enough memory.) On Linux this is called an Out Of Memory kill (OOM kill), and such kills are logged in the kernel message buffer.

Solution: Disable overcommitting of memory: echo 2 > /proc/sys/vm/overcommit_memory
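The relevant knobs on a Linux target look roughly like this (a sketch; requires root, and the persistence step assumes the system reads /etc/sysctl.conf at boot):

```shell
# Inspect the current policy: 0 = heuristic, 1 = always overcommit, 2 = never
cat /proc/sys/vm/overcommit_memory

# Disable overcommit so allocations fail up front (malloc/new see the
# failure) instead of the OOM killer firing later
echo 2 > /proc/sys/vm/overcommit_memory

# In mode 2 the commit limit is swap + overcommit_ratio% of RAM
# (default 50); raise it if legitimate allocations start failing too early
echo 80 > /proc/sys/vm/overcommit_ratio

# Make the setting survive a reboot
echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf
```

Note that with overcommit disabled, the commit limit applies system-wide, so other processes on the device may start seeing allocation failures sooner as well.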

OTHER TIPS

Two ideas come to mind.

1.) Write your own memory allocation function rather than depending on new directly. You mentioned you're on an embedded system, where special allocators are quite common in applications. Are you running your application directly on the hardware or are you running in a process under an executive/OS layer? If the latter, is there a system API provided for allocating memory?

2.) Check out C++'s std::set_new_handler and see if it can help you. It lets you register a function that is invoked when an allocation fails; in that function you may be able to free memory or otherwise act before whatever is killing the process gets a chance to. Reference: http://www.cplusplus.com/reference/std/new/set_new_handler/

Licensed under: CC-BY-SA with attribution