Question

While looking at the Clang and g++ C++11 implementation status pages I noticed something strange:
they support C++11 atomics, but they don't support the C++11 memory model.
I was under the impression that you must have the C++11 memory model to use atomics. So what exactly is the difference between support for atomics and support for the memory model?
Does a lack of memory-model support mean that legal C++11 programs that use std::atomic<T> aren't sequentially consistent?

references:
http://clang.llvm.org/cxx_status.html
http://gcc.gnu.org/gcc-4.7/cxx0x_status.html


Solution

One of the issues is the definition of "memory location", which allows (and forces the compiler to support) locking different structure members with different locks. There is a discussion about a real-world problem caused by this.

Basically the issue is that having a struct defined like this:

struct x {
    long a;
    unsigned int b1;
    unsigned int b2:1;
};

the compiler is free to implement a write to b2 by overwriting b1 as well (and apparently, judging from the report, some do). If that happens, the two fields would have to be locked as one. Under the C++11 memory model, however, this is not allowed: b1 and the bit-field b2 are distinct memory locations, so the compiler must ensure that simultaneous updates to b1 and b2 do not interfere with each other (it could do so by locking or CAS-ing each such update; life is difficult on some architectures). Quoting from the report:

I've raised the issue with our GCC guys and they said to me that: "C does not provide such guarantee, nor can you reliably lock different structure fields with different locks if they share naturally aligned word-size memory regions. The C++11 memory model would guarantee this, but that's not implemented nor do you build the kernel with a C++11 compiler."

More useful information can also be found in the wiki.

OTHER TIPS

I guess the "Lack of memory model" in these cases just means that the optimizers were written before the C++11 memory model got published, and might perform now invalid optimizations. It's very difficult and time-consuming to validate optimizations against the memory model, so it's no big surprise that the clang/gcc teams haven't finished that yet.

Does a lack of memory-model support mean that legal C++11 programs that use std::atomic<T> aren't sequentially consistent?

Yes, that's a possibility. It's even worse: the compiler might introduce data races into (according to the C++11 standard) race-free programs, e.g. by introducing speculative writes.

For example, several C++ compilers used to perform this optimization:

for (p = q; p != 0; p = p->next) {
    if (p->data > 0) ++count;
}

could be optimized into:

register int r1 = count;
for (p = q; p != 0; p = p->next) {
    if (p->data > 0) ++r1;
}
count = r1;

If all p->data are non-positive, the original source code never writes to count, but the optimized code unconditionally does. This can introduce a data race into an otherwise race-free program (another thread may be writing count at the same time), so the C++11 specification disallows such optimizations. Existing compilers now have to verify (and adjust if necessary) all their optimizations.

See Concurrency memory model compiler consequences for details.

It's not so much that they don't support the memory model, but that they don't (yet) support the full API in the Standard for interacting with it. That API includes, among other things, a number of mutexes.

However, both Clang and GCC have been as thread-aware as possible without a formal standard for some time. You don't have to worry about optimizations moving things to the wrong side of atomic operations.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow