arm64: klib: bitops: fix unpredictable stxr usage
We're currently relying on unpredictable behaviour in our testops
(test_and_*_bit), as stxr is unpredictable when the status register and
the source register are the same.
This patch reallocates the status register so as to bring us back into
the realm of predictable behaviour. Boot tested on an AEMv8 model.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
diff --git a/arch/arm64/lib/bitops.S b/arch/arm64/lib/bitops.S
index fd1e801..eaed8bb 100644
--- a/arch/arm64/lib/bitops.S
+++ b/arch/arm64/lib/bitops.S
@@ -50,8 +50,8 @@
 1:	ldxr	x2, [x1]
 	lsr	x0, x2, x3	// Save old value of bit
 	\instr	x2, x2, x4	// toggle bit
-	stxr	w2, x2, [x1]
-	cbnz	w2, 1b
+	stxr	w5, x2, [x1]
+	cbnz	w5, 1b
 	smp_dmb	ish
 	and	x0, x0, #1
 3:	ret