Locks around memory manipulation via inline assembly
I am new to the low-level stuff, so I am pretty oblivious to the kinds of problems you can run into down there, and I am not even sure I understand the term "atomic" correctly. Right now I am trying to make simple atomic locks around memory manipulation via extended inline assembly. Why? For the sake of curiosity. I know I am reinventing the wheel here and possibly oversimplifying the whole process.
The question? Does the code I present here achieve the goal of making memory manipulation both thread-safe and re-entrant?
- If it works, why?
- If it doesn't work, why?
- Not good enough? Should I for example make use of the register keyword in C?
What I simply want to do...
- Before memory manipulation, lock.
- After memory manipulation, unlock.
The code:
volatile int atomic_gate_memory = 0;
static inline void atomic_open(volatile int *gate)
{
asm volatile (
"wait:\n"
"cmp %[lock], %[gate]\n"
"je wait\n"
"mov %[lock], %[gate]\n"
: [gate] "=m" (*gate)
: [lock] "r" (1)
);
}
static inline void atomic_close(volatile int *gate)
{
asm volatile (
"mov %[lock], %[gate]\n"
: [gate] "=m" (*gate)
: [lock] "r" (0)
);
}
Then something like:
void *_malloc(size_t size)
{
atomic_open(&atomic_gate_memory);
void *mem = malloc(size);
atomic_close(&atomic_gate_memory);
return mem;
}
#define malloc(size) _malloc(size)
... and the same for calloc, realloc, free and fork (for Linux).
#ifdef _UNISTD_H
int _fork()
{
pid_t pid;
atomic_open(&atomic_gate_memory);
pid = fork();
atomic_close(&atomic_gate_memory);
return pid;
}
#define fork() _fork()
#endif
After loading the stack frame for atomic_open, objdump generates:
00000000004009a7 <wait>:
4009a7: 39 10 cmp %edx,(%rax)
4009a9: 74 fc je 4009a7 <wait>
4009ab: 89 10 mov %edx,(%rax)
Also, given the disassembly above, can I assume I am making an atomic operation because it is only one instruction?
Solution 1:
I think a simple spinlock that doesn't have any of the really major / obvious performance problems on x86 is something like this. Of course a real implementation would use a system call (like Linux futex) after spinning for a while, and unlocking would have to check if it needs to notify any waiters with another system call. This is important; you don't want to spin forever wasting CPU time (and energy / heat) doing nothing. But conceptually this is the spin part of a spinlock before you take the fallback path. It's an important piece of how light-weight locking is implemented. (Only attempting to take the lock once before calling the kernel would be a valid choice, instead of spinning at all.)
Implement as much of this as you like in inline asm, or preferably using C11 stdatomic, like this semaphore implementation. The listing below is NASM syntax. In GNU C, make sure you use a "memory" clobber to stop compile-time reordering of memory access (TTAS coherence issue?).
;;; UNTESTED ;;;;;;;;
;;; TODO: **IMPORTANT** fall back to OS-supported sleep/wakeup after spinning some
;;; e.g. Linux futex
; first arg in rdi as per AMD64 SysV ABI (Linux / Mac / etc)
;;;;;void spin_unlock(volatile char *lock)
global spin_unlock
spin_unlock:
; movzx eax, byte [rdi] ; debug check for double-unlocking. Expect 1
mov byte [rdi], 0 ; lock.store(0, std::memory_order_release)
ret
align 16
;;;;;void spin_lock(volatile char *lock)
global spin_lock
spin_lock:
mov eax, 1 ; only need to do this the first time, otherwise we know al is non-zero
.retry:
xchg al, [rdi]
test al,al ; check if we actually got the lock
jnz .spinloop
ret ; no taken branches on the fast-path
align 8
.spinloop: ; do {
pause
cmp byte [rdi], al ; C++11 terminology:
jne .retry ; if (lock.load(std::memory_order_acquire) != 1) retry the xchg
jmp .spinloop
; if not translating this to inline asm, you could put the spin loop *before* the function entry point, saving the last jmp
; but since this is probably too simplistic for real use, I'm going to leave it as-is.
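For reference, here is a rough C11 <stdatomic.h> sketch of the same idea (my own untested translation, with made-up names, and the same caveat about needing an OS-assisted sleep/wake fallback after spinning for a while):
#include <stdatomic.h>

typedef atomic_char spinlock_t;        // zero-initialized == unlocked

static inline void my_spin_lock(spinlock_t *lock)
{
    // First access is an atomic RMW (like xchg al, [rdi]), not a pure load.
    while (atomic_exchange_explicit(lock, 1, memory_order_acquire)) {
        // Lock was taken: spin read-only until it looks free, then retry the RMW.
        // A pause hint belongs in this inner loop; see the PAUSE discussion below.
        while (atomic_load_explicit(lock, memory_order_relaxed))
            ;
    }
}

static inline void my_spin_unlock(spinlock_t *lock)
{
    // A plain release store is enough to unlock, matching mov byte [rdi], 0 above.
    atomic_store_explicit(lock, 0, memory_order_release);
}
On x86, the acquire exchange should compile to xchg (a full barrier anyway) and the release store to a plain mov, so it corresponds closely to the NASM version above.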
A plain store has release semantics, but not sequential-consistency (which you'd get from an xchg or something). Acquire/release is enough to protect a critical section (hence the name).
If you were using a bitfield of atomic flags, you could use lock bts (test and set) for the equivalent of xchg-with-1. You can spin on bt or test. To unlock, you'd need lock btr, not just btr, because a plain btr would be a non-atomic read-modify-write of the byte, or even the containing 32 bits.
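For illustration, here is an untested sketch of that bit-based variant using GNU C __atomic builtins instead of inline asm (the function names are mine; depending on how it tests the result, the compiler may emit lock bts / lock and for these RMWs, or fall back to a lock cmpxchg retry loop):
static inline void bit_lock(unsigned *flags, unsigned bit)
{
    unsigned mask = 1u << bit;
    // Atomic RMW: set the bit and look at its old value, like lock bts + checking CF.
    while (__atomic_fetch_or(flags, mask, __ATOMIC_ACQUIRE) & mask) {
        // Bit was already set: spin read-only (like bt / test) until it clears.
        while (__atomic_load_n(flags, __ATOMIC_RELAXED) & mask)
            __builtin_ia32_pause();        // x86 pause hint
    }
}

static inline void bit_unlock(unsigned *flags, unsigned bit)
{
    // Must be an atomic RMW (like lock btr): a plain btr is a non-atomic RMW of the
    // containing dword and could lose another thread's update to a neighbouring bit.
    __atomic_fetch_and(flags, ~(1u << bit), __ATOMIC_RELEASE);
}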
With a byte- or int-sized lock like you should normally use, you don't even need a locked operation to unlock; release semantics are enough. glibc's pthread_spin_unlock does it the same way as my unlock function: a simple store.
(lock bts is not necessary; xchg or lock cmpxchg are just as good for a normal lock.)
The first access should be an atomic RMW
See the discussion on Does cmpxchg write destination cache line on failure? If not, is it better than xchg for spinlock? - if the first access is read-only, the CPU might send out just a share request for that cache line. Then, if it sees the line unlocked (the hopefully-common low-contention case), it would have to send out an RFO (Read For Ownership) to actually be able to write the cache line. So that's twice as many off-core transactions.
The downside is that this will take MESI exclusive ownership of that cache line, but what really matters is that the thread owning the lock can efficiently store a 0 so we can see it unlocked. Either way, read-only or RMW, that core will lose exclusive ownership of the line and have to RFO before it can commit that unlocking store.
I think a read-only first access would just optimize for slightly less traffic between cores when multiple threads queue up to wait for a lock that's already taken. That would be a silly thing to optimize for.
(Fastest inline-assembly spinlock also tested the idea for a massively contended spinlock with multiple threads doing nothing but trying to take the lock, with poor results. That linked answer makes some incorrect claims about xchg globally locking a bus - lock-prefixed operations on aligned data don't do that, just a cache lock (Can num++ be atomic for 'int num'?), and each core can be doing a separate atomic RMW on a different cache line at the same time.)
However, if that initial attempt finds it locked, we don't want to keep hammering on the cache line with atomic RMWs. That's when we fall back to read-only. 10 threads all spamming xchg for the same spinlock would keep the memory arbitration hardware pretty busy. It would likely delay the visibility of the store that unlocks (because that thread has to contend for exclusive ownership of the line), so it's directly counter-productive. It may also slow down memory access in general for other cores.
PAUSE is also essential, to avoid mis-speculation about memory ordering by the CPU. You exit the loop only when the memory you're reading was modified by another core. However, we don't want to pause in the un-contended case. On Skylake, PAUSE waits a lot longer, like ~100 cycles up from ~5, so you should definitely keep the spin-loop separate from the initial check for unlocked.
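From C, the usual ways to get that instruction are _mm_pause() from <immintrin.h>, GCC/Clang's __builtin_ia32_pause(), or a one-line asm statement; a tiny sketch (the wrapper name is my own):
#include <immintrin.h>      // _mm_pause(), x86 only

static inline void cpu_relax(void)
{
    _mm_pause();                     // encodes as rep nop, so it runs as a plain nop on old CPUs
    // or: __builtin_ia32_pause();   // GCC / Clang x86 builtin
    // or: asm volatile ("pause");   // raw inline asm
}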
I'm sure Intel's and AMD's optimization manuals talk about this, see the x86 tag wiki for that and tons of other links.
Not good enough? Should I for example make use of the register keyword in C?
register is a meaningless hint in modern optimizing compilers, except in debug builds (gcc -O0).