Question

Every process can use heap memory to store and share data within the process. A basic rule of programming is that whenever we allocate space on the heap, we must release it once the job is done; otherwise we get memory leaks.

int *pIntPtr = new int;
...
delete pIntPtr;

My question: is heap memory per-process?

If YES,

then a memory leak is possible only while the process is running.

If NO,

then the OS must be retaining the data somewhere in memory. If so, is there a way for another process to access that memory? That could also become a means of inter-process communication.

I suppose the answer to my question is YES. Please provide your valuable feedback.


Solution

On almost every system currently in use, heap memory is per-process. On older systems without protected memory, heap memory was system-wide. (In a nutshell, that's what protected memory does: it makes your heap and stack private to your process.)

So in your example code on any modern system, if the process terminates before delete pIntPtr is called, the memory pIntPtr points to will still be freed (though its destructor, not that an int has one, would not be called).

Note that protected memory is an implementation detail, not a feature of the C++ or C standards. A system is free to share memory between processes (modern systems just don't because it's a good way to get your butt handed to you by an attacker.)

Other tips

In most modern operating systems each process has its own heap, accessible by that process only and reclaimed once the process terminates; that "private" heap is usually what new allocates from. Separately, the OS may provide memory that is shared between processes (on Windows, for example, file mappings created via CreateFileMapping(); the Win16 GlobalAlloc() family once provided shared memory, though in Win32 it is per-process), and such shared memory can indeed be used for inter-process communication.

Generally the allocation of memory to a process happens at a lower level than heap management.

In other words, the heap is built within the process virtual address space given to the process by the operating system and is private to that process. When the process exits, this memory is reclaimed by the operating system.

Note that C++ does not mandate this, this is part of the execution environment in which C++ runs, so the ISO standards do not dictate this behaviour. What I'm discussing is common implementation.

In UNIX, the brk and sbrk system calls were used to allocate more memory from the operating system to expand the heap. Then, once the process finished, all this memory was given back to the OS.

The normal way to get memory which can outlive a process is with shared memory (under UNIX-type operating systems; not sure about Windows). This too can leak, but it is then a leak of system resources rather than process resources.

There are some special purpose operating systems that will not reclaim memory on process exit. If you're targeting such an OS you likely know.

Most systems will not allow you to access the memory of another process, but again...there are some unique situations where this is not true.

The C++ standard deals with this situation by not making any claim about what will happen if you fail to release memory and then exit, nor what will happen if you attempt to access memory that isn't explicitly yours to access. This is the very essence of what "undefined behavior" means and is the core of what it means for a pointer to be "invalid". There are more issues than just these two, but these two play a part.

Normally the O/S will reclaim any leaked memory when the process terminates.

For that reason I reckon it's OK for C++ programmers to never explicitly free any memory which is needed until the process exits; for example, any 'singletons' within a process are often not explicitly freed.

This behaviour may be O/S-specific, though (although it's true for e.g. both Windows and Linux): not theoretically part of the C++ standard.

For practical purposes, the answer to your question is yes. Modern operating systems will generally release memory allocated by a process when that process is shut down. However, to depend on this behavior is a very shoddy practice. Even if we can be assured that operating systems will always function this way, the code is fragile. If some function that fails to free memory suddenly gets reused for another purpose, it might translate to an application-level memory leak.

Nevertheless, the nature of this question and the example posted oblige me, ethically, to point you and your team toward RAII.

int *pIntPtr = new int;
...
delete pIntPtr;

This code reeks of memory leaks. If anything in [...] throws, you have a memory leak. There are several solutions:

int *pIntPtr = 0;
try
{
    pIntPtr = new int;
    ...
}
catch (...)
{
    delete pIntPtr;
    throw;
}
delete pIntPtr;

Second solution, using std::nothrow (not necessarily much better than the first, but it allows sensible initialization of pIntPtr at the point where it is defined):

int *pIntPtr = new (std::nothrow) int;  // requires <new>
if (pIntPtr)
{
    try
    {
         ...
    }
    catch (...)
    {
        delete pIntPtr;
        throw;
    }
    delete pIntPtr;
}

And the easy way:

scoped_ptr<int> pIntPtr(new int);
...

In this last and finest example, there is no need to call delete on pIntPtr as this is done automatically regardless of how we exit this block (hurray for RAII and smart pointers).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow