Question

I'm running the following C++ code on Ubuntu with 4 GB of RAM:

const long long nSize = 400000000;
double Array1[nSize];
for(int i=0; i<nSize; i++)
    Array1[i]= 2*2; // store on the stack

And this fits in RAM (my computer doesn't complain). Confusingly, htop says barely any additional RAM is being used at runtime... why? (I usually let the program sleep for 100 seconds in case htop needs time to update.)

On the other hand, if I dynamically allocate the huge array (as tutorials like this recommend I should), htop tells me it's using up most of the RAM, if not all of it, at which point it crashes:

double *pnArray2 = new double[nSize];
for(int i=0; i<nSize; i++)
    pnArray2[i] = 2*2; // store on the heap
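
For reference, a minimal self-contained program that reproduces the two snippets (assuming they sit directly in main; the USE_STACK switch is only there to toggle between them for illustration) would look roughly like this:

const long long nSize = 400000000;

int main() {
#ifdef USE_STACK
    double Array1[nSize];                    // the "stack" version from above
    for (int i = 0; i < nSize; i++)
        Array1[i] = 2 * 2;                   // store on the stack
#else
    double *pnArray2 = new double[nSize];    // the heap version from above
    for (int i = 0; i < nSize; i++)
        pnArray2[i] = 2 * 2;                 // store on the heap
    delete[] pnArray2;
#endif
    return 0;
}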

So why should I use the heap to store big data structures... if (as in this example) the stack can apparently handle even bigger arrays?

I thought the heap was supposed to be bigger than the stack! Please tell me where I'm going so wrong.

Was it helpful?

Solution

It's probably just the optimizer doing its job (or not, in the second case). The first array is never read, so the compiler is free to discard both the array and the loop that fills it. It's probably harder for it to optimize the second allocation away, since the memory obtained from new could theoretically be accessed outside the scope of the pointer.
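
As a rough, hypothetical sketch of what "accessed outside the scope of the pointer" can mean (makeArray and gKeep are made-up names, not anything from the question):

#include <iostream>

double *gKeep = nullptr;                     // hypothetical global; keeps the memory reachable

double *makeArray(long long n) {
    double *p = new double[n];               // heap allocation
    for (long long i = 0; i < n; ++i)
        p[i] = 2 * 2;
    gKeep = p;                               // the pointer "escapes" the function, so the
    return p;                                // optimizer can't prove the stores are dead
}

int main() {
    double *p = makeArray(1000);             // small n just so this actually runs
    std::cout << gKeep[42] << '\n';          // the memory really is used elsewhere
    delete[] p;
    return 0;
}

Once the pointer is stored somewhere the optimizer can't track, it has to assume the filled memory may be read later.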

I managed to reproduce this in MSVS 2010 in Release mode, and adding a simple

std::cout << Array1[42];

brought the stack version's memory usage up to the same value as the heap version's (granted, I did use a lower value for nSize).
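
A rough sketch of such a test, with nSize lowered so the array comfortably fits on a default-sized stack, might look like this:

#include <iostream>

int main() {
    const long long nSize = 100000;   // much lower than in the question
    double Array1[nSize];
    for (int i = 0; i < nSize; i++)
        Array1[i] = 2 * 2;
    std::cout << Array1[42] << '\n';  // reading an element makes the array observable,
                                      // so the optimizer can no longer discard it
    return 0;
}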

There's also no code generated for the first snippet, but there is for the second one.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow