Question

I have read through various articles, such as Double-checked locking: Clever, but broken, and I understand why the following code is broken under multithreaded use.

class SomeClass {
  private Resource resource = null;
  public Resource getResource() {
    if (resource == null) {
      synchronized (this) {
        if (resource == null) 
          resource = new Resource();
      }
    }
    return resource;
  }
}

However, according to the article's explanation, when a thread exits a synchronized block it performs a write barrier: it must flush any variables modified in that block out to main memory before releasing the lock. Therefore, when Thread A enters the synchronized block, it performs the following steps in order:

  1. memory for the new Resource object will be allocated;
  2. the constructor for Resource will be called, initializing the member fields of the new object;
  3. the field resource of SomeClass will be assigned a reference to the newly created object.

Finally, before Thread A exits the synchronized block, it will write its local resource object back to main memory, and Thread B will then read this newly created resource from main memory once it enters the synchronized block.

Why might Thread B see these memory operations in a different order than the one in which Thread A executes them? I thought Thread B wouldn't know the resource object had been created until Thread A flushed its local memory out to main memory on exiting the synchronized block, because Thread B can only read the resource object from the shared main memory.

Please correct my understanding. Thank you.


Solution

Finally, before Thread A exits the synchronized block, it will write its local resource object back to main memory, and Thread B will then read this newly created resource from main memory once it enters the synchronized block.

This is where it breaks down. Because Thread B accesses resource without synchronizing, there is no read barrier on its operations. It may therefore see a stale cached copy of memory for the resource cell, or (a bit later) for the cell corresponding to some field of the Resource instance.
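
To make that concrete, here is a minimal, hand-written sketch (using a hypothetical Resource class with one field) of the hazard: without volatile, the write inside the synchronized block may become visible to an unsynchronized reader before the constructor's writes do, so Thread B can observe a half-constructed object.

class Resource {
    int value;                        // should be 42 after construction
    Resource() { value = 42; }
}

class BrokenPublish {
    private Resource resource;        // not volatile: no ordering guarantee for readers

    // Thread A, inside the synchronized block. The JIT may effectively publish the
    // reference before the constructor's write to 'value' becomes visible to other
    // threads, i.e. as if 'resource' were stored first and 'value = 42' ran afterwards.
    void writerThreadA() {
        synchronized (this) {
            resource = new Resource();
        }
    }

    // Thread B's outer null check reads 'resource' with no lock and no read barrier,
    // so it may see a non-null reference whose fields still look uninitialized.
    Resource readerThreadB() {
        Resource r = resource;        // unsynchronized read
        if (r != null) {
            System.out.println(r.value);   // may print 0 instead of 42
        }
        return r;
    }
}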

Costi Ciudatu's fix is correct for Java versions >= 5.0. But for older versions, the semantics of volatile did not guarantee that all changes would be flushed from Thread A through main memory to Thread B.

Other tips

The article you quoted refers to the Java memory model prior to Java 5.0.

In Java 5.0+, your resource must be declared volatile for that to work. Even if the changes are flushed to main memory, there is no guarantee (other than volatile) that Thread B will read the new value from main memory rather than from its own local cache (where the value is null).
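
A sketch of the question's code with that fix applied (Java 5+; Resource is the class from the question, and the local variable simply avoids a second volatile read on the fast path):

class SomeClass {
    // volatile gives the write inside the synchronized block a happens-before
    // relationship with the unsynchronized read on the fast path
    private volatile Resource resource = null;

    public Resource getResource() {
        Resource r = resource;            // one volatile read on the common path
        if (r == null) {
            synchronized (this) {
                r = resource;             // re-check under the lock
                if (r == null) {
                    r = new Resource();
                    resource = r;         // safe publication thanks to volatile
                }
            }
        }
        return r;
    }
}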

In earlier versions, volatile did not impose strict restrictions on reordering, so double-checked locking was not guaranteed to work properly.

I am not going to say more than the others already have, but because this is such a frequently used pattern, why not just use a utility method for it, like Guava's Suppliers.memoize (sketched below)?
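
For example, assuming Guava is on the classpath (Suppliers.memoize lives in com.google.common.base) and Java 8 syntax for the method reference, the question's class could be reduced to a sketch like this:

import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

class SomeClass {
    // memoize() handles the lazy, thread-safe, initialize-once logic internally
    private final Supplier<Resource> resource = Suppliers.memoize(Resource::new);

    public Resource getResource() {
        return resource.get();   // first call builds the Resource, later calls reuse it
    }
}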

Is "Double-checked locking" one of the memes which won't die. IMHO using an enum is much smarter (As suggested by Josh Bloch in Effective Java 2nd edition)

enum SomeClass {
    INSTANCE; // thread safe and lazy loaded
}
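
Applied to the Resource from the question, that could look like the sketch below (the ResourceHolder name is just illustrative; the instance is created, thread-safely, when the enum class is first initialized):

enum ResourceHolder {
    INSTANCE;                             // initialized once by the class loader, thread safe

    private final Resource resource = new Resource();

    public Resource getResource() {
        return resource;
    }
}

// usage: Resource r = ResourceHolder.INSTANCE.getResource();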

The bug you are referring to was fixed in Java 5.0, in 2004.

In short: a) don't use it; b) use Java 5.0+; c) don't use really old, unsupported versions of Java, and take really, really old articles (from 2001) with a grain of salt.
