Question

I am trying to find the maximum memory that I can allocate on the stack, in global (static) storage, and on the heap in C++. I am running this program on a Linux system with 32 GB of RAM, and on my Mac with 2 GB of RAM.

/* test to determine the maximum memory that could be allocated for static, heap and stack memory  */

#include <iostream>
using namespace std;

//static/global
long double a[200000000];

int main()
{
    // stack
    long double b[999999999];

    // heap
    long double *c = new long double[3999999999];

    cout << "Sizeof(long double) = " << sizeof(long double) << " bytes\n";
    cout << "Allocated Global (Static) size of a = "
         << sizeof(a) / (double)(1024 * 1024 * 1024) << " Gbytes\n";
    cout << "Allocated Stack size of b = "
         << sizeof(b) / (double)(1024 * 1024 * 1024) << " Gbytes\n";
    cout << "Allocated Heap Size of c = "
         << (3999999999 * sizeof(long double)) / (double)(1024 * 1024 * 1024)
         << " Gbytes\n";

    delete[] c;
    return 0;
}

Results (on both):

Sizeof(long double) = 16 bytes
Allocated Global (Static) size of a = 2.98023 Gbytes 
Allocated Stack size of b = 14.9012 Gbytes 
Allocated Heap Size of c = 59.6046 Gbytes

I am using GCC 4.2.1. My question is:

Why does my program run at all? Since the stack limit (16 MB on Linux, 8 MB on the Mac) is far smaller than the roughly 15 GB array b, I expected the program to throw an error. I have read many of the questions asked on this topic, but I couldn't solve my problem from the answers given there.


Solution

On some systems you can allocate any amount of memory that fits in the address space. The problems begin when you start actually using that memory.

What happens is that the OS reserves a virtual address range for the process, without mapping it to anything physical, or even checking that there's enough physical memory (including swap) to back that address range up. The mapping only happens in a page-by-page fashion, when the process tries to access newly allocated pages. This is called memory overcommitment.

Try writing to every sysconf(_SC_PAGESIZE)-th byte of your huge arrays and see what happens.

OTHER TIPS

Linux overcommits, meaning that it can promise a process more memory than is actually available on the system; it is not until that memory is actually used by the process that real memory (physical main memory or swap space on disk) is allocated to it. My guess would be that Mac OS X works in a similar way.
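On Linux you can inspect (and change) this behavior through the vm.overcommit_memory sysctl; a quick check, assuming a Linux box with /proc mounted:

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = strict accounting against RAM + swap
cat /proc/sys/vm/overcommit_memory
```

Under strict accounting (mode 2, set with `sudo sysctl vm.overcommit_memory=2`), an oversized request like the 60 GB one above fails immediately in new/malloc instead of later, when the pages are first touched.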

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow