Question

I am very curious about this. If you use AtomicIntegerArray with values 0 and 1, you can accomplish the same thing as an array of AtomicBoolean. Example:

final AtomicIntegerArray array1 = new AtomicIntegerArray(10);
array1.compareAndSet(3, 0, 1); // 0 = false and 1 = true

// exactly the same thing as:

final AtomicBoolean[] array2 = new AtomicBoolean[10];
for(int i = 0; i < array2.length; i++)
     array2[i] = new AtomicBoolean(false);
array2[3].compareAndSet(false, true);

Which one do you think is faster and better?


Solution

Which one do you think is faster and better?

Interesting question. Any speed difference would probably only be visible if you are doing a very large number of cycles. Otherwise, worrying about it smacks of premature optimization. I would pick the pattern that is the cleanest and most easily maintained.

Under the covers, both methods use Unsafe.compareAndSwapInt(...), so the performance should be very similar. Since access to volatile storage does not block, this is not about collisions. The AtomicBoolean array will certainly have a larger number of objects associated with it, each with its own volatile storage. Also, under the covers AtomicBoolean stores the boolean value as an int, so there are no savings there.
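
For illustration, here is a rough sketch of that layout, loosely paraphrased from the pre-Java-9 JDK sources (the synchronized block below is only a stand-in for the real Unsafe CAS, so this is not the actual implementation):

// Rough paraphrase of how AtomicBoolean stores its state: a volatile int
// holding 0 or 1, CASed just like a slot of AtomicIntegerArray's int[].
class AtomicBooleanSketch {
    private volatile int value;   // 0 = false, 1 = true

    public boolean compareAndSet(boolean expect, boolean update) {
        int e = expect ? 1 : 0;
        int u = update ? 1 : 0;
        // The real class calls Unsafe.compareAndSwapInt(this, valueOffset, e, u)
        // (a VarHandle in Java 9+); synchronized is only a stand-in here.
        synchronized (this) {
            if (value != e) {
                return false;
            }
            value = u;
            return true;
        }
    }

    public boolean get() {
        return value != 0;
    }
}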

My instinct tells me to use the AtomicIntegerArray. Less code for you to write, which typically means more reliance on the JDK to do the right thing. To know for sure you would have to test a large number of iterations on your production architecture. I suspect the difference is going to be negligible and hard to measure.

Not a great answer but hopefully something helpful here.

Edit:

So I just ran some tests and I can't see any significant differences. Here's my little test program. It used 100 threads and ran 10 million iterations, and the two approaches were within 0-10% of each other. As @mttdbrd points out, this is in no way a "real life" test. Only by benchmarking this in production, with the code actually functioning as it should, will you truly know if there is a difference.
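
For reference, a minimal sketch of this style of micro-benchmark (the toggle pattern, helper names, and structure here are illustrative, not the original test program):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicIntegerArray;

// Illustrative sketch: 100 threads hammer compareAndSet on a shared structure,
// toggling slots back and forth, and the wall-clock time is printed. No JIT
// warm-up is done here (the follow-up edit below addresses that).
public class CasBench {
    static final int THREADS = 100;
    static final int ITERATIONS = 10_000_000;
    static final int SIZE = 1000;

    interface Op { void apply(int index); }

    public static void main(String[] args) throws InterruptedException {
        final AtomicIntegerArray ints = new AtomicIntegerArray(SIZE);
        final AtomicBoolean[] bools = new AtomicBoolean[SIZE];
        for (int i = 0; i < SIZE; i++) {
            bools[i] = new AtomicBoolean();
        }

        long intMillis = run(i -> {
            if (!ints.compareAndSet(i, 0, 1)) {
                ints.compareAndSet(i, 1, 0);
            }
        });
        long boolMillis = run(i -> {
            if (!bools[i].compareAndSet(false, true)) {
                bools[i].compareAndSet(true, false);
            }
        });
        System.out.println("AtomicIntegerArray in " + intMillis + " millis");
        System.out.println("AtomicBoolean[]    in " + boolMillis + " millis");
    }

    static long run(Op op) throws InterruptedException {
        Thread[] threads = new Thread[THREADS];
        long start = System.currentTimeMillis();
        for (int t = 0; t < THREADS; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < ITERATIONS; i++) {
                    op.apply(i % SIZE);
                }
            });
            threads[t].start();
        }
        for (Thread thread : threads) {
            thread.join();
        }
        return System.currentTimeMillis() - start;
    }
}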

Edit:

OK, after tweaking my program to make sure I warmed up the HotSpot compiler per @mttdbrd's document, and changing the program so the number of entries can be tuned more easily, I see some interesting results.

With 1000 elements in the arrays:

AtomicIntegerArray in 4224 millis
AtomicBoolean[]    in 3546 millis    (always a little bit faster)

However with 10 elements in the array:

AtomicIntegerArray in 26506 millis
AtomicBoolean[]    in 13263 millis  (much faster)

Notice also the overall difference in speed. It makes sense given the increased thread contention: 100 threads are much more likely to have to spin with 10 elements than with 1000.

What does this mean? That if you change from one to the other you might save yourself at most 1 nanosecond per operation. Might. So instead of worrying about the performance of the two, you should pick the pattern that is the cleanest and most easily maintained.

OTHER TIPS

Actually, looking at the implementation of AtomicIntegerArray

http://fuseyism.com/classpath/doc/java/util/concurrent/atomic/AtomicIntegerArray-source.html

it seems that it is managed with more attention than I thought.

It doesn't use objects to store the values, which makes it more memory-efficient. In fact, it uses a simple int[] and then accesses it in a thread-safe way.

So I think that if you need many AtomicIntegers, it is better to use an AtomicIntegerArray.

AtomicIntegerArray: uses the Unsafe class to make atomic accesses to the single int[] held inside the AtomicIntegerArray

AtomicBoolean[]: every element of the array is its own object, and each one provides atomic access to its own value
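
As a rough paraphrase of that single-int[] design, based on the pre-Java-9 sources (application code cannot normally obtain sun.misc.Unsafe like this, so treat it purely as a sketch):

// Paraphrased pre-Java-9 internals: one plain int[] plus Unsafe CAS on the
// byte offset of each slot. Field names are approximate, not verbatim.
class AtomicIntegerArraySketch {
    private static final sun.misc.Unsafe unsafe = sun.misc.Unsafe.getUnsafe();
    private static final long base = unsafe.arrayBaseOffset(int[].class);
    private static final long scale = unsafe.arrayIndexScale(int[].class); // 4 for int[]
    private final int[] array;

    AtomicIntegerArraySketch(int length) {
        array = new int[length];
    }

    private long byteOffset(int i) {
        return base + (long) i * scale;   // location of slot i within the int[]
    }

    public boolean compareAndSet(int i, int expect, int update) {
        return unsafe.compareAndSwapInt(array, byteOffset(i), expect, update);
    }

    public int get(int i) {
        return unsafe.getIntVolatile(array, byteOffset(i));
    }
}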

So in a heavily concurrent, multi-threaded environment I would expect somewhat better performance from AtomicBoolean[], at the cost of more memory consumption than with AtomicIntegerArray.

I'd say that both are equally performant, except when heavily contended. Then (as Gray's benchmarks show), AtomicBoolean[] easily wins over AtomicIntegerArray. What's missing is the explanation:

AtomicIntegerArray places all its ints next to each other, since it operates on an internal int[], while AtomicBoolean[] is an array of references to objects, each holding an int. Those objects add an overhead of a few (8 or 12) bytes each, so the underlying ints are not tightly packed.

So the two span a different number of cache lines, and this is where false sharing comes into play. As a cache line is typically 64 bytes, the whole data of new AtomicIntegerArray(10) fits into one (unless it starts unaligned, in which case two cache lines get used). This means essentially a 100% probability of false sharing, i.e., it's as if all threads contended for a single variable.

With the overhead of AtomicBoolean[], we get something like 160 bytes instead of 40 and therefore much less false sharing.
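
A quick back-of-envelope check of those numbers (assuming 64-byte cache lines, 12-byte object headers and 8-byte alignment; the exact figures are JVM-specific):

// Illustrative arithmetic only; real layouts depend on the JVM and flags
// such as compressed oops.
public class LayoutMath {
    public static void main(String[] args) {
        int cacheLine = 64;

        int intArrayData = 10 * Integer.BYTES;            // 40 bytes of int slots
        System.out.println("AtomicIntegerArray(10) payload: " + intArrayData
                + " bytes, fits in one " + cacheLine + "-byte cache line");

        int perObject = 12 + Integer.BYTES;               // header + int value
        int aligned = ((perObject + 7) / 8) * 8;          // align to 8 bytes -> 16
        int total = 10 * aligned;                         // ~160 bytes across 10 objects
        System.out.println("10 AtomicBoolean objects: ~" + total
                + " bytes, spread over ~" + ((total + cacheLine - 1) / cacheLine)
                + " cache lines (plus the reference array)");
    }
}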


I guess that Gray's benchmark has quite a bit of overhead (the modulo operations and the conditionals) and that the real speed difference would be even bigger.


This doesn't mean that AtomicIntegerArray is bad. It's just that it shouldn't be used like this when really heavily contended. A simple solution would be to allocate a much larger array and use only every 16th element, effectively reducing false sharing to zero, as in the sketch below.
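
A minimal sketch of that idea (the stride of 16 ints matches a 64-byte cache line; the wrapper class and its names are illustrative, not an existing API):

import java.util.concurrent.atomic.AtomicIntegerArray;

// Illustrative padding wrapper: logical slot i lives at physical index i * 16,
// so every used int sits on its own 64-byte cache line (16 ints * 4 bytes).
public class PaddedAtomicFlags {
    private static final int STRIDE = 16;
    private final AtomicIntegerArray array;

    public PaddedAtomicFlags(int length) {
        array = new AtomicIntegerArray(length * STRIDE);
    }

    public boolean compareAndSet(int i, boolean expect, boolean update) {
        return array.compareAndSet(i * STRIDE, expect ? 1 : 0, update ? 1 : 0);
    }

    public boolean get(int i) {
        return array.get(i * STRIDE) != 0;
    }
}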

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow