Question

I have a dynamic array as a member of my class. I'm trying to find an efficient way to resize it and keep all of the information in it. I know that vectors would work well for this but I want to do this with a dynamic array instead.

My class has a dynamic array of type unsigned __int8 called data.

Is the following acceptable?

unsigned __int8 *temp = data;
data = new unsigned __int8[NewSize]();

if(OldSize >= NewSize)
{
    for(int i = 0; i < NewSize; i++)
        data[i] = temp[i];
}
else
{
    for(int i = 0; i < OldSize; i++)
        data[i] = temp[i];
}

delete [] temp;

Or should I do this a different way? Any suggestions?

Edit

Fixed an error in my example and changed char to unsigned __int8.

Edit 2

I will not be reallocating often, if at all. I just want the functionality in place so that, if it is ever needed, I don't have to write the create-a-new-object-and-copy-everything code later.

The class I am writing is for creating and saving Bitmap (.bmp) images. The array simply holds the file bytes. The image size will (should) be known when I create the object.


Solution

Since the array holds a POD (plain old data) type, you can replace the loops with memcpy() instead:

unsigned __int8 *temp = new unsigned __int8[NewSize];

if (OldSize >= NewSize)
    memcpy(temp, data, NewSize * sizeof(unsigned __int8));
else
{
    memcpy(temp, data, OldSize * sizeof(unsigned __int8));
    memset(&temp[OldSize], 0, (NewSize - OldSize) * sizeof(unsigned __int8));
}

delete[] data;
data = temp;

Or at least use std::copy() (for POD types, std::copy() behaves like memcpy(), but for non-POD types it loops element by element so that object assignment semantics are preserved):

unsigned __int8 *temp = new unsigned __int8[NewSize];

if (OldSize >= NewSize)
    std::copy(data, &data[NewSize], temp);
else
{
    std::copy(data, &data[OldSize], temp);
    memset(&temp[OldSize], 0, (NewSize - OldSize) * sizeof(unsigned __int8));
}

delete[] data;
data = temp;

That being said, you really should use std::vector<unsigned __int8> instead, as it handles these details for you. This type of manual array management is what you have to do in C, but in C++ you should avoid it whenever possible and use native C++ facilities instead.
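To illustrate, here is a minimal sketch of the same byte buffer managed by std::vector. The class and member names (Bitmap, data) are illustrative, not taken from the asker's actual class; note that vector's resize() already does exactly what the question's code does by hand: it keeps the old bytes and zero-fills any growth.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the asker's class using std::vector instead of
// a manually managed dynamic array.
class Bitmap {
public:
    // The vector is zero-initialized on construction, like new T[n]().
    explicit Bitmap(std::size_t size) : data(size) {}

    // Keeps existing bytes; new bytes (if growing) are zero-filled.
    void resize(std::size_t newSize) { data.resize(newSize); }

    std::size_t size() const { return data.size(); }
    std::uint8_t& operator[](std::size_t i) { return data[i]; }
    std::uint8_t operator[](std::size_t i) const { return data[i]; }

private:
    std::vector<std::uint8_t> data;  // replaces unsigned __int8 *data
};
```

Copying, destruction, and exception safety all come for free, which the raw-pointer version does not provide.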

OTHER TIPS

By doing it this way, the array must be resized every time a new element is added. The resize operation is Θ(n), so the insert operation also becomes Θ(n).

The common approach is to double (or triple, etc.) the array's capacity every time it has to grow. A single resize is still Θ(n), but the amortized insertion cost becomes Θ(1).

Also, the capacity is usually kept separate from the size: the capacity is an implementation detail, while the size is part of the array's interface.

And when elements are removed, you may want to check whether the capacity is much larger than needed and, if so, shrink it; otherwise, once the array grows large, that space is never released.
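The points above can be sketched as follows. This is a minimal illustration, not a production container: the class name ByteBuffer and the growth/shrink thresholds (double on growth, halve when only a quarter full) are assumptions chosen for the example.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Minimal sketch: size (interface) kept separate from capacity
// (implementation detail), with geometric growth and shrinking.
class ByteBuffer {
public:
    ByteBuffer() = default;
    ~ByteBuffer() { delete[] data; }

    // Copying is disabled for brevity; a real class would implement it.
    ByteBuffer(const ByteBuffer&) = delete;
    ByteBuffer& operator=(const ByteBuffer&) = delete;

    void push_back(std::uint8_t b) {
        if (size == capacity)
            reallocate(capacity ? capacity * 2 : 8);  // double: amortized Θ(1) insert
        data[size++] = b;
    }

    void pop_back() {
        --size;
        if (capacity > 8 && size < capacity / 4)  // mostly empty: release space
            reallocate(capacity / 2);
    }

    std::size_t getSize() const { return size; }
    std::size_t getCapacity() const { return capacity; }
    std::uint8_t operator[](std::size_t i) const { return data[i]; }

private:
    void reallocate(std::size_t newCapacity) {  // Θ(n), but called rarely
        std::uint8_t* temp = new std::uint8_t[newCapacity];
        if (data) {
            std::memcpy(temp, data, size);
            delete[] data;
        }
        data = temp;
        capacity = newCapacity;
    }

    std::uint8_t* data = nullptr;
    std::size_t size = 0;
    std::size_t capacity = 0;
};
```

Ten consecutive push_back calls trigger only two reallocations here (to capacity 8, then 16), instead of ten with resize-to-exact-fit.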

You can see more about it here: http://en.wikipedia.org/wiki/Dynamic_array

The problem with this approach is that you resize to exactly the size needed, which means the time to insert a new element varies widely.

For example, if you keep doing a "push_back"-like operation, you would reallocate on every insertion.

An alternative is to allocate extra capacity up front, to avoid frequent reallocations that cost a lot in performance.

std::vector, for example, over-allocates in exactly this way so that appending has amortized constant cost. Here is a link that explains it in detail:

Amortized analysis of std::vector insertion

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow