Question

Am I right in thinking that, in the absence of other thread synchronization, all access to shared mutable state must use some form of lower-level thread safety (such as memory barriers) to avoid operation reordering (whether by the compiler, the JIT, or the CPU)?

For example, in C#, must I use Volatile.Read(ref someVar) and Volatile.Write(ref someVar, someValue) everywhere I don't use any other form of synchronization?

I get the feeling that the answer is yes, I must. But it seems... Excessive.


A note, before anyone chimes in: I'm not talking about thread-safety or concurrency here; just memory consistency. I'm well aware that memory consistency is not the only concern when writing multi-threaded code.


Solution

First of all, if you declare a field volatile (which is what C#'s volatile keyword does), you can just read and write it normally and still get the memory barriers; you don't need explicit Volatile.Read/Volatile.Write calls for that field.
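As a minimal sketch of that (the class, field, and method names here are made up for the example):

class Flag
{
    // Every ordinary read/write of this field already has volatile
    // (acquire/release) semantics because of the keyword, so callers
    // don't need explicit Volatile.Read/Volatile.Write for it.
    private volatile bool done;

    public void Set()   { done = true; }   // compiled as a volatile write
    public bool IsSet() { return done; }   // compiled as a volatile read
}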

To the actual question: that depends on how you define "memory consistency". Depending on your use case, you don't have to synchronize every read and write; you can get by with a single volatile write/read pair that publishes several values at once:

// Thread 1:
foo = 5;                 // plain (non-volatile) write
bar = 7;                 // plain (non-volatile) write
done = true;             // volatile write ("release"): publishes foo and bar

// Thread 2:
if (done)                // volatile read ("acquire")
{
    // foo and bar are guaranteed to be 5 and 7 here,
    // even when read with plain, non-volatile reads.
}
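The same pattern as a self-contained, runnable C# sketch using Volatile.Write/Volatile.Read instead of a volatile field (the class and field names are made up for the example):

using System;
using System.Threading;
using System.Threading.Tasks;

class PublishExample
{
    static int foo;      // plain, non-volatile fields
    static int bar;
    static bool done;    // published via Volatile.Write / Volatile.Read

    static void Main()
    {
        var writer = Task.Run(() =>
        {
            foo = 5;                          // plain write
            bar = 7;                          // plain write
            Volatile.Write(ref done, true);   // release: publishes foo and bar
        });

        var reader = Task.Run(() =>
        {
            // Spin until the volatile read observes the publication.
            while (!Volatile.Read(ref done)) { }   // acquire
            // At this point foo == 5 and bar == 7, even via plain reads.
            Console.WriteLine($"foo={foo}, bar={bar}");
        });

        Task.WaitAll(writer, reader);
    }
}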

But yes, you still need at least that one memory barrier per thread there; otherwise your code would be "faulty". Then again, that depends on how you define "faulty".

One example I can think of (well, really the only one) is a relaxed CAS without memory-ordering guarantees, which lets you implement performance counters that do not drop updates and do not force ordering. For the source of that particular idea, see here (last paragraph before build.java).
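A rough C# sketch of the counter idea follows, with one important caveat: .NET does not expose a relaxed CAS, and Interlocked.CompareExchange implies a full fence, so unlike the relaxed CAS described above it does force ordering. The sketch only illustrates the "retry so no update is dropped" part, and the class name is made up for the example:

using System.Threading;

class LosslessCounter
{
    private long value;

    public void Increment()
    {
        long observed, desired;
        do
        {
            observed = Volatile.Read(ref value);
            desired = observed + 1;
            // CompareExchange only succeeds if nobody else changed the value
            // in the meantime; on failure we re-read and retry, so no
            // concurrent increment is ever lost.
        } while (Interlocked.CompareExchange(ref value, desired, observed) != observed);
    }

    public long Read() => Volatile.Read(ref value);
}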

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow