Question

I am a graduate student in Mechanical Engineering. My research group has an in-house finite element code written in C++. I have noticed that a lot of the memory for vectors and arrays is allocated statically, for example:

In Element.h

// Shared storage for a finite element residual vector
static Real* sRe;

In Element.C

if ( ! sIsResAndJacAllocated )
{
    UInt numElemDofs = this->GetNumDofs();
    // Residual storage
    sRe = new Real[numElemDofs*numElemDofs];
    sIsResAndJacAllocated = true;
}

In this manner, the vector is allocated only once, by the first element that reaches this function; all other objects then reuse that same memory.
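For context, the pattern described above can be condensed into a small self-contained sketch. The class name and accessors mirror the snippets, but the exact layout (`Real`, `UInt`, where the buffer is requested) is an assumption:

```cpp
#include <cstddef>

// Assumed stand-ins for the code's own Real/UInt typedefs.
using Real = double;
using UInt = std::size_t;

class Element {
public:
    explicit Element(UInt numDofs) : mNumDofs(numDofs) {}

    UInt GetNumDofs() const { return mNumDofs; }

    // Lazily allocate one shared buffer; every element reuses it.
    Real* GetResidualBuffer() {
        if (!sIsResAllocated) {
            UInt n = GetNumDofs();
            sRe = new Real[n * n];   // sized once, by the first element
            sIsResAllocated = true;
        }
        return sRe;
    }

private:
    UInt mNumDofs;
    static Real* sRe;
    static bool  sIsResAllocated;
};

// Static data members need one out-of-class definition (in the .C file).
Real* Element::sRe = nullptr;
bool  Element::sIsResAllocated = false;
```

Note that this buffer is sized by whichever element happens to arrive first, and it is shared state, so it is not safe if elements are assembled from multiple threads.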

The first person to develop the code knew more C than C++, which is why much of it is written this way.

The group's rationale for allocating these vectors and arrays "statically" is that it is faster to allocate these large blocks of memory once and reuse them, compared with allocating the same array for each finite element, or on every function call (using alloca, for example). Is this true? Is there really a big speed difference?

We are trying to reach a conclusion about this so we can decide whether to keep the static memory allocation or get rid of it. I have been looking for an answer for a few weeks now with no luck, and I hope your opinion will help us reach a conclusion.
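For reference, the allocate-once behaviour can be kept while dropping the raw pointer and the bool flag, for example with a function-local static std::vector. This is only one common alternative, sketched here with assumed `Real`/`UInt` typedefs and a simplified class:

```cpp
#include <cstddef>
#include <vector>

using Real = double;       // assumed stand-in for the code's Real
using UInt = std::size_t;  // assumed stand-in for the code's UInt

class Element {
public:
    explicit Element(UInt numDofs) : mNumDofs(numDofs) {}
    UInt GetNumDofs() const { return mNumDofs; }

    // One shared buffer for all elements, sized on first use.
    // resize() to the current size is a no-op, so after the first call
    // no further allocation happens -- no raw new[], no allocation flag.
    Real* GetResidualBuffer() {
        static std::vector<Real> sRe;  // constructed once, on first call
        sRe.resize(GetNumDofs() * GetNumDofs());
        return sRe.data();
    }

private:
    UInt mNumDofs;
};
```

Unlike the raw-pointer version, the vector frees its memory automatically at program exit, and it regrows safely if a later element needs more space (which also invalidates the previously returned pointer).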

Thanks.

Hernan

Was it helpful?

Solution

First off, the word "statically" will confuse C++ developers into thinking you mean a variable with the "static" storage-class specifier. Your example is not declaring the array statically; the storage is created on the heap via the "new" operator.

I don't know how big these arrays are, but creating the storage once (statically, as you call it) is probably a good idea. Of course, it depends on how many times these arrays would have to be created and destroyed if that were done on each function call (your alternative). Certainly, there is going to be more overhead in continually creating and destroying objects on the heap.

Whether this is a performance issue is hard to say, as we don't know what else your program does. If it spends 90% of its time on other processing (file I/O, lots of calculations, etc.), then maybe this memory allocation would not be a big factor one way or the other. Not knowing the OS, compiler options, etc., you could always code it both ways and run a performance benchmark.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow