C++ Memory Barriers for Atomics
Solution 1:
Both MemoryBarrier (MSVC) and _mm_mfence (supported by several compilers) provide a hardware memory fence, which prevents the processor from moving reads and writes across the fence.
The main difference is that MemoryBarrier has platform-specific implementations for x86, x64 and IA64, whereas _mm_mfence specifically uses the mfence SSE2 instruction, so it is not always available.
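For illustration, a minimal sketch of a full hardware fence wrapper along these lines; the helper name full_fence is invented for this example, and MemoryBarrier comes from the Windows headers:

    #if defined(_MSC_VER)
    #  include <windows.h>    // MemoryBarrier
    #else
    #  include <emmintrin.h>  // _mm_mfence (requires SSE2)
    #endif

    // Hypothetical helper: issue a full hardware memory fence.
    inline void full_fence()
    {
    #if defined(_MSC_VER)
        MemoryBarrier();   // platform-specific full fence (xchg / lock or / IA64 mf)
    #else
        _mm_mfence();      // emits the SSE2 mfence instruction
    #endif
    }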
On x86 and x64, MemoryBarrier is implemented with an xchg and a lock or respectively, and I have seen some claims that this is faster than mfence. However, my own benchmarks show the opposite, so apparently it is very much dependent on the processor model.
Another difference is that mfence can also be used for ordering non-temporal stores/loads (movntq etc.).
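As a sketch of that use case, a non-temporal store followed by mfence before a flag is raised; the function name, parameters and flag handling are invented for the example (real code would use atomics for the flag, and dst must be 16-byte aligned):

    #include <emmintrin.h>  // _mm_stream_si128, _mm_mfence

    void stream_and_publish(__m128i* dst, __m128i value, volatile int* ready)
    {
        _mm_stream_si128(dst, value);  // non-temporal (write-combining) store
        _mm_mfence();                  // orders the NT store before the flag store
        *ready = 1;
    }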
GCC also has __sync_synchronize, which generates a hardware fence.
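For example, as a sketch only (publish and its parameters are made up, and production code would use std::atomic for the flag):

    // GCC/Clang built-in full fence; emits mfence on x86-64.
    void publish(int* data, int value, volatile int* ready)
    {
        *data = value;
        __sync_synchronize();  // hardware + compiler fence between the two stores
        *ready = 1;
    }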
asm volatile ("" : : : "memory") in GCC and _ReadWriteBarrier in MSVC only provide a compiler-level memory fence, preventing the compiler from reordering memory accesses. That means the processor is still free to reorder them.
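For reference, the two spellings side by side; the macro name COMPILER_FENCE is invented for this sketch:

    // Compiler-level fence only: stops the compiler from reordering memory
    // accesses across this point, but emits no fence instruction.
    #if defined(_MSC_VER)
    #  include <intrin.h>
    #  define COMPILER_FENCE() _ReadWriteBarrier()
    #else
    #  define COMPILER_FENCE() asm volatile("" : : : "memory")
    #endif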
Compiler fences are generally used in combination with operations that already have some kind of implicit hardware ordering. E.g. on x86/x64 all stores have release semantics and all loads have acquire semantics, so you only need a compiler fence when implementing load-acquire and store-release.
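A sketch of what that looks like on x86/x64 using the GCC syntax; the helper names are made up, and modern code would simply use std::atomic with memory_order_acquire/release:

    // x86/x64-only sketch: plain loads/stores already have acquire/release
    // ordering in hardware, so only the compiler must be kept from reordering.
    inline void store_release(volatile int* p, int v)
    {
        asm volatile("" : : : "memory");  // earlier writes stay before the store
        *p = v;
    }

    inline int load_acquire(const volatile int* p)
    {
        int v = *p;
        asm volatile("" : : : "memory");  // later reads stay after the load
        return v;
    }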
Solution 2:
See my answer here on the hardware-level semantics of fences. What is not mentioned there is that they also prevent the reordering of loads, stores, or both (depending on the fence) across the fence, at both the compiler level and the hardware level.