Question

Oracle Java documentation says:

Instances of java.util.Random are threadsafe. However, the concurrent use of the same java.util.Random instance across threads may encounter contention and consequent poor performance. Consider instead using ThreadLocalRandom in multithreaded designs.

What might be the reason behind poor performance?

Solution

Internally, java.util.Random keeps its current seed in an AtomicLong, and every request for a new random number must atomically advance that seed with a compare-and-set loop. When several threads share one instance, they all contend on that single atomic variable.

From the implementation of java.util.Random:

protected int next(int bits) {
    long oldseed, nextseed;
    AtomicLong seed = this.seed;
    do {
        oldseed = seed.get();
        nextseed = (oldseed * multiplier + addend) & mask;
    } while (!seed.compareAndSet(oldseed, nextseed));
    return (int)(nextseed >>> (48 - bits));
}
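
The loop above is where the contention shows up: if another thread updates the seed between the get() and the compareAndSet(), the CAS fails and the step is retried. Below is a minimal standalone sketch (not JDK code; the class name and thread/iteration counts are illustrative, and the LCG constants are copied only for demonstration) that counts failed CAS attempts on a shared AtomicLong:

import java.util.concurrent.atomic.AtomicLong;

public class SeedContentionDemo {
    // Same LCG constants that java.util.Random uses internally.
    private static final long MULTIPLIER = 0x5DEECE66DL;
    private static final long ADDEND = 0xBL;
    private static final long MASK = (1L << 48) - 1;

    public static void main(String[] args) throws InterruptedException {
        AtomicLong seed = new AtomicLong(42);
        AtomicLong failedCas = new AtomicLong();
        Thread[] workers = new Thread[8];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 1_000_000; i++) {
                    while (true) {
                        long oldSeed = seed.get();
                        long nextSeed = (oldSeed * MULTIPLIER + ADDEND) & MASK;
                        if (seed.compareAndSet(oldSeed, nextSeed)) {
                            break;                    // this thread's update won the race
                        }
                        failedCas.incrementAndGet();  // lost the race; retry, just like next() does
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("Failed CAS attempts: " + failedCas.get());
    }
}

On a multi-core machine the failure count is typically well above zero and grows with the thread count; that retry work, plus the cache traffic on the shared seed, is the contention the documentation warns about.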

ThreadLocalRandom, on the other hand, keeps a separate seed for each thread, so seed updates never touch shared state and there is no contention at all.
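
As a usage sketch (the executor setup, pool size, and bound are illustrative assumptions, not taken from the question), the fix is simply to call ThreadLocalRandom.current() inside each task instead of handing every task the same Random instance:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class ThreadLocalRandomUsage {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                // current() returns the generator bound to the calling thread,
                // so no shared seed is touched and no CAS retries can occur.
                int value = ThreadLocalRandom.current().nextInt(100);
                System.out.println(Thread.currentThread().getName() + " -> " + value);
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}

Note that ThreadLocalRandom.current() should be called by the thread that will consume the values; capturing the returned instance and sharing it across threads defeats the purpose.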

OTHER TIPS

The Random class guards its internal state with a single shared AtomicLong rather than an actual lock: every call advances the seed with a compare-and-set. Only one thread's update can succeed at a time, so when several threads draw numbers concurrently the losers must retry the update, and those retries (plus the cache traffic on the shared variable) are what degrade throughput.

ThreadLocalRandom can be used instead to provide transparent per-thread instancing: each thread gets its own state, so updates happen on a per-thread basis and the shared atomic update is avoided entirely.

Note that a single AtomicLong update is cheap in itself: compareAndSet boils down to a lock cmpxchg instruction on x86, so performance only degrades badly when a large number of threads hammer the same instance and force repeated retries. Outside of the atomic update, the per-call cost is just a long multiply, an add, a mask, and a shift.
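
A crude way to see the effect end to end (a rough sketch only: no JIT warmup and no JMH, and the thread and iteration counts are arbitrary assumptions, so treat any numbers as directional):

import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class ContentionSketch {
    static final int THREADS = 8;
    static final int CALLS_PER_THREAD = 5_000_000;

    public static void main(String[] args) throws InterruptedException {
        Random shared = new Random();

        long sharedNanos = time(() -> { for (int i = 0; i < CALLS_PER_THREAD; i++) shared.nextInt(); });
        long localNanos  = time(() -> { for (int i = 0; i < CALLS_PER_THREAD; i++) ThreadLocalRandom.current().nextInt(); });

        System.out.printf("shared Random:     %d ms%n", sharedNanos / 1_000_000);
        System.out.printf("ThreadLocalRandom: %d ms%n", localNanos / 1_000_000);
    }

    // Runs the same body on THREADS threads concurrently and returns wall-clock nanoseconds.
    static long time(Runnable body) throws InterruptedException {
        Thread[] workers = new Thread[THREADS];
        long start = System.nanoTime();
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(body);
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return System.nanoTime() - start;
    }
}

The shared-instance run typically slows down as the thread count grows, while the ThreadLocalRandom run scales with the number of cores.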

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow