Question

When I write a value into a field, what guarantees do I get about when the new value will be saved to main memory? For example, how do I know that the processor doesn't keep the new value in its private cache, but updates main memory?
Another example:

int m_foo;

void Read() // executed by thread X (on processor #0)
{
   Console.Write(m_foo);
}

void Write() // executed by thread Y (on processor #1)
{
   m_foo = 1;
}

Is it possible that after Write() has finished executing, some other thread executes Read() but actually sees "0" as the current value (since perhaps the previous write to m_foo wasn't flushed yet)?
What kind of primitives (besides locks) are available to ensure that the writes were flushed?


EDIT
In the code sample I've used, the write and the read are placed in different methods. Doesn't Thread.MemoryBarrier only affect instruction reordering within the same scope?

Also, assuming they won't be inlined by the JIT, how can I make sure that the value written to m_foo won't be kept in a register, but is written to main memory? (Or, when m_foo is read, that it won't return a stale value from the CPU cache?)

Is it possible to achieve this without using locks or the volatile keyword? (Also, suppose I'm not using primitive types, but WORD-sized structs, so volatile cannot be applied.)

Solution

Volatile and Interlocked have already been mentioned, but since you asked for primitives, one addition to the list is Thread.MemoryBarrier(), placed before your writes or after your reads. This guarantees that reads and writes of memory are not reordered across the barrier.

This is doing "by hand" what lock, Interlocked, and volatile can do automatically most of the time. You could use it as a full replacement for any other technique, but it is arguably the hardest path to travel, and so says MSDN:

"It is difficult to build correct multithreaded programs by using MemoryBarrier. For most purposes, the C# lock statement, the Visual Basic SyncLock statement, and the methods of the Monitor class provide easier and less error-prone ways to synchronize memory accesses. We recommend that you use them instead of MemoryBarrier. "

How to use MemoryBarrier

A very fine example is the pair of implementations of VolatileRead and VolatileWrite, both of which use MemoryBarrier internally. The basic rule of thumb to follow is: when you read a variable, place a memory barrier after the read; when you write a value, the memory barrier must come before the write.

In case you have doubts whether this is less efficient than lock, consider that locking is nothing more than "full fencing": it places a memory barrier both before and after the code block (ignoring Monitor for a moment). This principle is well explained in Albahari's excellent, definitive article on threads, locking, volatile, and memory barriers.

From reflector:

public static void VolatileWrite(ref byte address, byte value)
{
    MemoryBarrier();
    address = value;
}

public static byte VolatileRead(ref byte address)
{
    byte num = address;
    MemoryBarrier();
    return num;
}
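Applied to the Read/Write example from the question, the same rule can be used directly. This is a sketch of my own (the method names come from the question, but the barrier placement is mine, following the rule of thumb above): the barrier comes before the write and after the read.

```csharp
using System;
using System.Threading;

class Example
{
    static int m_foo;

    // Executed by thread Y (processor #1): the barrier comes
    // *before* the write, mirroring VolatileWrite above.
    public static void Write()
    {
        Thread.MemoryBarrier();
        m_foo = 1;
    }

    // Executed by thread X (processor #0): the barrier comes
    // *after* the read, mirroring VolatileRead above, so the
    // printed value is not stale.
    public static void Read()
    {
        int value = m_foo;
        Thread.MemoryBarrier();
        Console.Write(value);
    }
}
```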

OTHER TIPS

If you want to ensure it is written promptly and in order, then mark it as volatile, or (with more pain) use Thread.VolatileRead / Thread.VolatileWrite (not an attractive option, and it's easy to miss one, which would make the whole exercise useless).

volatile int m_foo;

Otherwise you have virtually no guarantees of anything (as soon as you talk multiple threads).

You might also want to look at locking (Monitor) or Interlocked, which achieve the same effect as long as the same approach is used for all accesses (i.e. all lock, or all Interlocked, etc.).
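As a sketch of the Interlocked approach (the class and method names are mine, not from the question): Interlocked operations act as full fences, so the write is globally visible once the call returns, and a read performed through CompareExchange cannot return a stale value.

```csharp
using System.Threading;

class InterlockedExample
{
    static int m_foo;

    public static void Write()
    {
        // Full fence: the new value is visible to all
        // processors once Exchange returns.
        Interlocked.Exchange(ref m_foo, 1);
    }

    public static int Read()
    {
        // CompareExchange with identical comparand and value
        // leaves m_foo unchanged, but performs the read with
        // full-fence semantics.
        return Interlocked.CompareExchange(ref m_foo, 0, 0);
    }
}
```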

As long as you don't use any synchronisation you have no guarantee that a thread running on one processor sees the changes made by another thread running on another processor. That's because the value could be cached in the CPU caches or in a CPU register.

Therefore you need to mark the variable as volatile. That creates a happens-before relation between the reads and the writes.

That's not a processor cache issue. Writes usually pass through (they go both to the cache and to main memory) and all reads go through the cache. But there are many other caches along the way (programming language, libraries, operating system, I/O buffers, etc.). The compiler can also choose to keep a variable in a processor register and never write it to main memory (that's what the volatile keyword is designed for: it prevents a value from being kept in a register, for instance when it can be memory-mapped I/O).
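A classic illustration of the register problem (a hypothetical sketch; the names are mine): without volatile, the JIT is free to read the flag once, keep it in a register, and spin forever. With volatile, every iteration performs a fresh load.

```csharp
using System.Threading;

class FlagExample
{
    // Remove 'volatile' and the JIT is permitted to hoist the
    // read of m_stop out of the loop into a register, so the
    // worker might never observe the write below.
    static volatile bool m_stop;

    public static void Run()
    {
        var worker = new Thread(() =>
        {
            // Volatile read: a fresh load on every iteration.
            while (!m_stop) { }
        });
        worker.Start();

        Thread.Sleep(100);
        m_stop = true;   // volatile write: visible to the worker
        worker.Join();   // returns because the worker re-reads m_stop
    }
}
```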

If you have multiple processes or multiple threads and synchronisation is an issue, you must handle it explicitly; there are many ways to do so depending on the use case.

For a single-threaded program you don't need to care: the compiler will do what it must, and reads will see what has been written.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow