Question

I am interested in the situation where one thread is waiting for a change of a variable in a while loop:

while (myFlag == false) {
    // do smth
}

This loop can repeat indefinitely.

In the meantime, another thread has changed the value of this variable:

myFlag = true;

Can the reader thread see the result of the change made in the other thread if this variable is NOT volatile? As I understand it, in general this will never happen. Or am I wrong? When and under what circumstances can the first thread see the change to the variable and exit the loop? Is this possible without using the volatile keyword? Does the size of the processor's cache play a role in this situation?

Please explain and help me understand! Thank you in advance!!


Solution

Can the reader-thread see the result of changing the value of the variable in the other thread if this variable is NOT volatile?

It may be able to, yes. It's just that it isn't guaranteed to see the change.

In general, as I understand it will never happen.

No, that's not the case.

You're writing to a variable and then reading it from a different thread. Whether or not the reader sees the change will depend on the exact processor and memory architecture involved. Without any memory barriers, you aren't guaranteed to see the new value - but you're certainly not guaranteed not to see it either.
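
To make the fix concrete, here's a minimal sketch (the class name VolatileFlagSketch and the thread setup are my own, not from the question): marking myFlag as volatile is what guarantees the spinning reader will see the writer's update.

public class VolatileFlagSketch {
    // volatile guarantees the reader will eventually see the writer's update;
    // without it, the spin loop below might never terminate.
    static volatile boolean myFlag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!myFlag) {
                // do smth (busy-wait; real code would usually sleep, park or wait)
            }
            System.out.println("Reader saw myFlag become true");
        });
        reader.start();

        Thread.sleep(100); // let the reader start spinning
        myFlag = true;     // volatile write: guaranteed to become visible
        reader.join();
    }
}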

OTHER TIPS

Can the reader-thread see the result of changing the value of the variable in the other thread if this variable is NOT volatile?

I'd like to expand a bit on @Jon's excellent answer.

The Java memory model says that all of a thread's memory will be updated when it crosses a memory barrier: read barriers cause a thread's cached memory to be refreshed from central memory, and write barriers cause the thread's local changes to be written out to central memory.

So if the thread that writes your flag also writes to another volatile field or enters a synchronized block, that will cause your flag to be updated in central memory. If the reading thread reads from another volatile field or enters a synchronized block in the // do smth section after the update has happened, it will see the update. You just can't rely on when this will happen, or on the write and read occurring in the right order. If your threads have no other memory-synchronization points, it may never happen. See the sketch below for an illustration.
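
Here's a rough sketch of that situation (class and lock names such as IncidentalBarrierSketch, readerLock and writerLock are mine). The behaviour it relies on is typical JVM behaviour, not something the JLS promises: the reader's loop body happens to enter a synchronized block and the writer crosses a barrier of its own, so on mainstream JVMs the non-volatile flag is usually seen eventually.

public class IncidentalBarrierSketch {
    static boolean myFlag = false;                 // deliberately NOT volatile
    static final Object readerLock = new Object(); // unrelated lock used in the loop body
    static final Object writerLock = new Object(); // a different, equally unrelated lock

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!myFlag) {
                synchronized (readerLock) {
                    // do smth -- entering this block crosses a read barrier on
                    // typical JVMs, so the cached value of myFlag tends to be
                    // refreshed. The JLS does NOT guarantee this for an unrelated
                    // lock, so in theory the loop could still spin forever.
                }
            }
            System.out.println("Reader eventually saw myFlag");
        });
        reader.start();

        Thread.sleep(100);
        synchronized (writerLock) {
            myFlag = true; // the surrounding block crosses a write barrier on typical JVMs
        }
        reader.join();
    }
}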

Edit:

Given the discussion below, which I've had a couple of times now on various different questions, I thought I would expand on my answer. There is a big difference between the guarantees provided by the Java language and its memory model and the reality of JVM implementations. The JLS and JMM define memory barriers and provide "happens-before" guarantees only between volatile reads and writes of the same field and synchronized locks on the same object.
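
For contrast, here's a short sketch of the case the JLS does guarantee (field names payload and published are mine): a volatile write and a later volatile read of the same field create a happens-before edge, so the plain write made before the volatile write is guaranteed to be visible too.

public class SameFieldGuaranteeSketch {
    static int payload = 0;                    // plain, NOT volatile
    static volatile boolean published = false; // the volatile field both threads use

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            payload = 42;      // plain write...
            published = true;  // ...followed by a volatile write of the same field
        });

        Thread reader = new Thread(() -> {
            while (!published) {
                // spin until the volatile read of the SAME field sees true
            }
            // Guaranteed by the JMM: the volatile write happens-before this volatile
            // read, so the earlier plain write to payload is visible as well.
            System.out.println("payload = " + payload); // always prints 42
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}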

However, on all architectures that I've heard of, the implementation of the memory barriers that enforce the memory synchronization is not field- or object-specific. When a volatile field is read and the read barrier is crossed on a specific thread, that thread's view of all of central memory is updated, not just the particular volatile field in question. The same is true for volatile writes: after a write is made to a volatile field, all updates from the local thread are written to central memory, not just that field. What the JLS does guarantee is that instructions cannot be reordered past the volatile access.

So, if thread-A has written to a volatile field, then all of its updates, even those not marked as volatile, will have been written to central memory. After this operation has completed, if thread-B then reads from a different volatile field, it will see all of thread-A's updates, even those not marked as volatile. Again, there are no guarantees around the timing of these events, but if they happen in that order then thread-B will see thread-A's updates.
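
Here's a hedged sketch of that scenario (field names such as writerDone and unrelatedFlag are mine): thread A writes a plain field and then a volatile field, while thread B reads a different volatile field and then the plain field. The JLS gives no guarantee for this cross-field case, but on common JVMs and hardware it usually behaves as described above.

public class CrossFieldVisibilitySketch {
    static int plainData = 0;                      // NOT volatile
    static volatile boolean writerDone = false;    // volatile field written by thread A
    static volatile boolean unrelatedFlag = false; // a DIFFERENT volatile field, read by thread B

    public static void main(String[] args) throws InterruptedException {
        Thread threadA = new Thread(() -> {
            plainData = 42;    // plain write
            writerDone = true; // volatile write: on common JVMs this publishes ALL of
                               // thread A's pending writes, not just writerDone
        });

        Thread threadB = new Thread(() -> {
            boolean ignored = unrelatedFlag; // volatile read of a DIFFERENT field: on common
                                             // JVMs this refreshes thread B's cached view
            // The JLS only guarantees visibility of plainData if B had read writerDone,
            // the same volatile field A wrote. Reading a different volatile gives no
            // formal guarantee, but on mainstream JVMs/hardware B usually prints 42 here.
            System.out.println("plainData = " + plainData);
        });

        threadA.start();
        Thread.sleep(100); // crude way to make it likely A ran first; NOT a guarantee
        threadB.start();
        threadB.join();
    }
}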

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow