Question

In the following link:

http://www.baptiste-wicht.com/2012/12/cpp-benchmark-vector-list-deque/

If you scroll down to the section where he compares the sorts, he first shows the results for an 8-byte data type, comparing list, vector and deque. For the 8-byte (and 128-byte) data types the list is much slower than the vector and deque. However, at the bottom of the sort comparison he uses a 16-byte data type and suddenly the list is faster.

How can the list be slower for 8 and 128 bytes, but faster for a size in between (i.e. 16 bytes)?

EDIT: I noticed the same pattern in the random_insert section. The list is slower than vector and deque for 8-byte and 32-byte data types, but MUCH faster for 16 bytes.


Solution

Note that the 16-byte data type is not a trivial 16-byte data type; the article states that:

The non-trivial data type is made of two longs and has very stupid assignment operator and copy constructor that just does some maths (totally meaningless but costly). One may argue that is not a common copy constructor neither a common assignment operator and one will be right, however, the important point here is that it is costly operators which is enough for this benchmark.

So this has nothing to do with 16 bytes being a magic number; it's the costly copy constructor and assignment operator of the non-trivial type. Sorting a vector or deque shuffles elements around, and every one of those moves pays the price of the expensive operators, whereas std::list::sort only relinks nodes and never copies the elements, so the list suddenly comes out ahead, as in the sketch below.
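
To make that concrete, here is a minimal, hypothetical sketch, not the article's actual benchmark code: the type name Costly, the burn() busy-loop, the element count and the time_ms helper are all invented for illustration. It builds a 16-byte type with deliberately expensive copy operations and sorts the same data once in a std::vector and once in a std::list. Declaring the copy operations suppresses the implicit move operations, so every element std::sort shuffles around falls back to the costly copy, while list::sort just relinks nodes.

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <list>
    #include <random>
    #include <vector>

    // Hypothetical stand-in for the article's non-trivial 16-byte type:
    // two 64-bit integers plus deliberately expensive copy operations.
    struct Costly {
        std::int64_t a = 0, b = 0;

        Costly() = default;
        explicit Costly(std::int64_t x) : a(x), b(x) {}

        // Copy constructor and assignment do meaningless but hard-to-optimize work.
        Costly(const Costly& o) : a(o.a), b(o.b) { burn(); }
        Costly& operator=(const Costly& o) { a = o.a; b = o.b; burn(); return *this; }

        bool operator<(const Costly& rhs) const { return a < rhs.a; }

    private:
        void burn() const {
            volatile std::int64_t sink = 0;          // volatile defeats dead-code elimination
            for (int i = 0; i < 200; ++i) sink += a + i;
        }
    };

    template <typename F>
    long long time_ms(F&& f) {
        auto start = std::chrono::steady_clock::now();
        f();
        auto end = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    }

    int main() {
        std::mt19937_64 rng(42);
        std::uniform_int_distribution<std::int64_t> dist(0, 1000000);

        // Fill a vector and a list with the same random values.
        std::vector<Costly> vec;
        std::list<Costly> lst;
        for (int i = 0; i < 100000; ++i) {
            std::int64_t v = dist(rng);
            vec.emplace_back(v);
            lst.emplace_back(v);
        }

        // With moves suppressed, every element std::sort rearranges pays the copy cost.
        long long vec_ms = time_ms([&] { std::sort(vec.begin(), vec.end()); });

        // std::list::sort only relinks nodes; no element is ever copied.
        long long lst_ms = time_ms([&] { lst.sort(); });

        std::cout << "vector std::sort: " << vec_ms << " ms\n"
                  << "list::sort:       " << lst_ms << " ms\n";
    }

With a type like this you would expect the vector's time to be dominated by the artificial copy cost, while the list's time is dominated by pointer chasing; make the copies cheap (a plain 8-byte integer, say) and the balance tips back toward the vector, because copying becomes nearly free and the vector's cache locality wins, which is exactly the pattern the benchmark shows.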

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow