How to atomically perform sequential load and store operations?
04-06-2022
Question
Consider this code under GCC 4.8.0:
#include <atomic>

std::atomic<bool> a;
std::atomic<bool> b;
a.store( b.load() ); // want to be atomic
How can I make the line above atomic as a whole? In other words, how can I obtain an atomic assignment between atomic variables?
Are there any alternatives to std::atomic which allow this?
I have found __transaction_atomic {/* any code goes here */}, which is enabled in GCC by -fgnu-tm. With this, one can write anything in the block and it will be performed atomically.
Now the questions are:
Is __transaction_atomic implemented with mutexes? If so, what does the mutex actually lock?
Does the implementation of __transaction_atomic change depending on what is in its block? If so, how does it change?
Solution 2
I do not think that is possible, and I do not think such an operation is useful. Why do you want it? If you have such a hard requirement, then you should just lock a std::mutex around the a = b assignment.
UPDATE
I have tested the __transaction_atomic block with Cygwin64's GCC 4.8.1, and this very short source
extern int a, b;
void foo ()
{
__transaction_atomic
{
a = b;
}
}
results in oodles of instructions calling ITM library functions:
_Z3foov:
.LFB0:
pushq %rdi #
.seh_pushreg %rdi
pushq %rsi #
.seh_pushreg %rsi
subq $200, %rsp #,
.seh_stackalloc 200
movaps %xmm6, 32(%rsp) #,
.seh_savexmm %xmm6, 32
movaps %xmm7, 48(%rsp) #,
.seh_savexmm %xmm7, 48
movaps %xmm8, 64(%rsp) #,
.seh_savexmm %xmm8, 64
movaps %xmm9, 80(%rsp) #,
.seh_savexmm %xmm9, 80
movaps %xmm10, 96(%rsp) #,
.seh_savexmm %xmm10, 96
movaps %xmm11, 112(%rsp) #,
.seh_savexmm %xmm11, 112
movaps %xmm12, 128(%rsp) #,
.seh_savexmm %xmm12, 128
movaps %xmm13, 144(%rsp) #,
.seh_savexmm %xmm13, 144
movaps %xmm14, 160(%rsp) #,
.seh_savexmm %xmm14, 160
movaps %xmm15, 176(%rsp) #,
.seh_savexmm %xmm15, 176
.seh_endprologue
movl $43, %edi #,
xorl %eax, %eax #
call _ITM_beginTransaction #
testb $2, %al #, tm_state.4
je .L2 #,
movq .refptr.b(%rip), %rax #, tmp67
movl (%rax), %edx # b, b
movq .refptr.a(%rip), %rax #, tmp66
movl %edx, (%rax) # b, a
movaps 32(%rsp), %xmm6 #,
movaps 48(%rsp), %xmm7 #,
movaps 64(%rsp), %xmm8 #,
movaps 80(%rsp), %xmm9 #,
movaps 96(%rsp), %xmm10 #,
movaps 112(%rsp), %xmm11 #,
movaps 128(%rsp), %xmm12 #,
movaps 144(%rsp), %xmm13 #,
movaps 160(%rsp), %xmm14 #,
movaps 176(%rsp), %xmm15 #,
addq $200, %rsp #,
popq %rsi #
popq %rdi #
jmp _ITM_commitTransaction #
.p2align 4,,10
.L2:
movq .refptr.b(%rip), %rcx #,
call _ITM_RU4 #
movq .refptr.a(%rip), %rcx #,
movl %eax, %edx # D.2368,
call _ITM_WU4 #
call _ITM_commitTransaction #
nop
movaps 32(%rsp), %xmm6 #,
movaps 48(%rsp), %xmm7 #,
movaps 64(%rsp), %xmm8 #,
movaps 80(%rsp), %xmm9 #,
movaps 96(%rsp), %xmm10 #,
movaps 112(%rsp), %xmm11 #,
movaps 128(%rsp), %xmm12 #,
movaps 144(%rsp), %xmm13 #,
movaps 160(%rsp), %xmm14 #,
movaps 176(%rsp), %xmm15 #,
addq $200, %rsp #,
popq %rsi #
popq %rdi #
ret
.seh_endproc
.ident "GCC: (GNU) 4.8.1"
.def _ITM_beginTransaction; .scl 2; .type 32; .endef
.def _ITM_commitTransaction; .scl 2; .type 32; .endef
.def _ITM_RU4; .scl 2; .type 32; .endef
.def _ITM_WU4; .scl 2; .type 32; .endef
.section .rdata$.refptr.b, "dr"
.globl .refptr.b
.linkonce discard
.refptr.b:
.quad b
.section .rdata$.refptr.a, "dr"
.globl .refptr.a
.linkonce discard
.refptr.a:
.quad a
This was with the -O3 option.