arm64: spinlock: fix ll/sc unlock on big-endian systems
When unlocking a spinlock, we perform a read-modify-write on the owner
ticket in order to increment it and store it back with release
semantics.
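
As a rough C-level sketch (for illustration only; the real code is the
inline assembly in arch_spin_unlock(), shown in the diff below), the
intended operation is:

  u16 owner = READ_ONCE(lock->owner);          /* 16-bit load of the owner ticket */
  smp_store_release(&lock->owner, owner + 1);  /* 16-bit store with release semantics */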
In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
therefore store back the wrong halfword on a big-endian system,
corrupting the lock after the first unlock and killing the system dead.
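
To make the failure mode concrete, here is a minimal user-space sketch
(not kernel code; the struct layout and field names are illustrative
and do not mirror arch_spinlock_t exactly) of a 32-bit load of a 16-bit
field followed by a write-back of only the low halfword:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Illustrative ticket-lock layout. */
  struct ticket_lock {
          uint16_t owner;         /* ticket of the current lock holder */
          uint16_t next;          /* next ticket to hand out */
  };

  int main(void)
  {
          struct ticket_lock lock = { .owner = 1, .next = 2 };
          uint32_t word;
          uint16_t half;

          /* 32-bit load covering the whole lock, as the buggy "ldr" did. */
          memcpy(&word, &lock, sizeof(word));
          word += 1;

          /*
           * 16-bit store of the low halfword, as "stlrh" does.  On a
           * little-endian host the low halfword is the owner, so the
           * increment lands where it should; on a big-endian host the
           * owner sits in the high halfword, so the wrong value is
           * written back and the lock is corrupted.
           */
          half = (uint16_t)word;
          memcpy(&lock.owner, &half, sizeof(half));

          printf("owner=%u next=%u\n",
                 (unsigned int)lock.owner, (unsigned int)lock.next);
          return 0;
  }

On a little-endian host this prints "owner=2 next=2" (the intended
increment); on a big-endian host it prints "owner=3 next=2", i.e. the
owner ticket has been corrupted.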
This patch fixes the unlock code to use 16-bit accessors consistently.
Signed-off-by: Will Deacon <will.deacon@arm.com>
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 87ae7ef..c85e96d 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -110,7 +110,7 @@
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
 	/* LL/SC */
-	"	ldr	%w1, %0\n"
+	"	ldrh	%w1, %0\n"
 	"	add	%w1, %w1, #1\n"
 	"	stlrh	%w1, %0",
 	/* LSE atomics */