Question

I'm trying to determine when it's more efficient to use List<T>.Add() versus the Array.Resize() method.

The documentation for Array.Resize says it makes a copy of the entire array, and places it into a new object. The old object would have to be discarded. Where does this old object reside? On the stack or the heap?

I don't know how List.Add() works.

Does anyone know how the List.Add method compares to the static Array.Resize method?

I'm interested in memory usage (and cleanup), and in which approach is better for 300 value types versus 20,000 value types.

For what it's worth, I'm planning on running this code on one of the embedded flavors of .NET, potentially the .NET Gadgeteer.


Solution

You should use a List<T>.

Using Array.Resize forces you to reallocate and copy the array each time you add an item (a plain array cannot hold spare capacity), which makes your code much slower.
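To illustrate, here is a hypothetical sketch (not code from the question) of what growing a plain array on every Add looks like:

```csharp
using System;

// Hypothetical sketch: growing a plain array one element at a time.
class ResizingBag
{
    private int[] _data = new int[0];

    public void Add(int value)
    {
        // Every Add reallocates the array and copies all existing elements.
        Array.Resize(ref _data, _data.Length + 1);
        _data[_data.Length - 1] = value;
    }
}
```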

A List<T> is backed by an array, but it holds spare capacity to put items into.
All it needs to do to add an item is set an element in the array and increment its internal size counter.
When the backing array gets full, the list doubles its capacity, so future items can again be added cheaply.
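Roughly, an array-backed list works like the sketch below. The names and the initial capacity of 4 are illustrative, not the actual BCL implementation:

```csharp
using System;

// Minimal sketch of an array-backed list with spare capacity.
class TinyList<T>
{
    private T[] _items = new T[4]; // spare capacity up front
    private int _count;

    public int Count { get { return _count; } }

    public void Add(T item)
    {
        if (_count == _items.Length)
        {
            // Only when the spare capacity runs out: double and copy once.
            T[] bigger = new T[_items.Length * 2];
            Array.Copy(_items, bigger, _count);
            _items = bigger;
        }
        _items[_count++] = item; // the common case: one store plus a counter bump
    }
}
```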

OTHER TIPS

The .NET Micro Framework does not support generics, so I will be using an array, copying and destroying it as needed.

I might compare that performance to the unrolled linked list mentioned in the powertools library here: Any implementation of an Unrolled Linked List in C#?

The .NET Micro Framework does not (yet) support generics. You're limited with regard to dynamic collections.

One thing to consider when picking your approach is that managed code on the microcontroller is very, very slow. Many operations on the managed .NET Micro Framework objects actually just call into native code to do the work, which is much faster.

For instance, compare copying an array element by element in a for loop versus calling Array.Copy(), which does essentially the same thing but in native code.
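For example (a sketch; actual timings depend on the device):

```csharp
using System;

int[] source = new int[1000];
int[] dest = new int[1000];

// Managed loop: every iteration runs as managed code on the device.
for (int i = 0; i < source.Length; i++)
{
    dest[i] = source[i];
}

// One call that performs the same copy in native code.
Array.Copy(source, dest, source.Length);
```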

Where possible, use these native extensions to get better performance. Also consider taking a look at the MicroLinq project on CodePlex. There is a sub project devoted just to enhanced collections on NETMF (also available as a NuGet package). The code is freely available and openly licensed for any purpose. (Full disclosure: I'm the dev of that project.)

If you can get away with allocating a large array up front and tracking the highest position at which real data has been stored, that will be the fastest approach, but it requires more work and thought in the design and takes time away from building the cool stuff.
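A minimal sketch of that approach, assuming a worst-case capacity of 2,000 elements (the capacity and names are made up for illustration):

```csharp
using System;

int[] buffer = new int[2000]; // allocated once, never resized
int used = 0;                 // highest position that holds real data

// Adding is just a store and a counter increment; no allocation, no copying.
buffer[used++] = 42;
buffer[used++] = 7;

// Only walk the filled portion when reading.
for (int i = 0; i < used; i++)
{
    Console.WriteLine(buffer[i]);
}
```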

List will only be faster if you resize the array frequently, for example every time you add an item. However, if you only resize every few frames, List and built-in arrays should be roughly equivalent, with arrays perhaps still slightly faster.

I've looked at a decompiled List and found that it grows its internal array by allocating a new one and copying, essentially what Array.Resize() does. It manages its own element counter, uses the array's length as the capacity, and adds some extra headroom when Add() needs to grow. So you can probably develop a more optimal allocation strategy than List for your case, but you will have to manage the element counter manually.

You also get rid of the indexer overhead when accessing elements, because List's indexer is just a method that returns an element of the internal array. I think it is only worth replacing List with a manually resized array if the list is the bottleneck.
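As a sketch of such a manual strategy (the fixed growth step of 32 and all names are assumptions, not taken from List<T>):

```csharp
using System;

class IntBuffer
{
    private int[] _items = new int[32];
    private int _count;

    public int Count { get { return _count; } }

    public void Add(int value)
    {
        if (_count == _items.Length)
        {
            // Grow by a fixed chunk instead of doubling; tune this to your data.
            Array.Resize(ref _items, _items.Length + 32);
        }
        _items[_count++] = value;
    }

    // Exposing the raw array lets callers index it directly,
    // avoiding the method-call overhead of a List indexer.
    public int[] Items { get { return _items; } }
}
```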

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow