locking/xchg/alpha: Remove superfluous memory barriers from the _local() variants
The following two commits:
79d442461df74 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
472e8c55cf662 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
... ended up adding unnecessary barriers to the _local() variants on Alpha,
which the previous code took care to avoid.
Fix them by adding the smp_mb() into the xchg() and cmpxchg() macros
rather than into the ____xchg()/____cmpxchg() variants, which also
generate the _local() helpers.
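For context, the _local() variants defined earlier in
arch/alpha/include/asm/cmpxchg.h expand to the barrier-free
__xchg_local()/__cmpxchg_local() helpers and must stay barrier-free; a
minimal sketch of their shape (illustrative, not part of this patch):

    #define cmpxchg_local(ptr, o, n)                                \
    ({                                                              \
        __typeof__(*(ptr)) _o_ = (o);                               \
        __typeof__(*(ptr)) _n_ = (n);                               \
        /* no smp_mb(): atomic only w.r.t. the local CPU */         \
        (__typeof__(*(ptr))) __cmpxchg_local((ptr),                 \
                        (unsigned long)_o_,                         \
                        (unsigned long)_n_,                         \
                        sizeof(*(ptr)));                            \
    })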
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Fixes: 472e8c55cf662 ("locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs")
Fixes: 79d442461df74 ("locking/xchg/alpha: Clean up barrier usage by using smp_mb() in place of __ASM__MB")
Link: http://lkml.kernel.org/r/1519704058-13430-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
index 8a2b331..6c7c394 100644
--- a/arch/alpha/include/asm/cmpxchg.h
+++ b/arch/alpha/include/asm/cmpxchg.h
@@ -38,19 +38,31 @@
#define ____cmpxchg(type, args...) __cmpxchg ##type(args)
#include <asm/xchg.h>

+/*
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ */
#define xchg(ptr, x) \
({ \
+ __typeof__(*(ptr)) __ret; \
__typeof__(*(ptr)) _x_ = (x); \
- (__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_, \
- sizeof(*(ptr))); \
+ smp_mb(); \
+ __ret = (__typeof__(*(ptr))) \
+ __xchg((ptr), (unsigned long)_x_, sizeof(*(ptr))); \
+ smp_mb(); \
+ __ret; \
})

#define cmpxchg(ptr, o, n) \
({ \
+ __typeof__(*(ptr)) __ret; \
__typeof__(*(ptr)) _o_ = (o); \
__typeof__(*(ptr)) _n_ = (n); \
- (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
- (unsigned long)_n_, sizeof(*(ptr)));\
+ smp_mb(); \
+ __ret = (__typeof__(*(ptr))) __cmpxchg((ptr), \
+ (unsigned long)_o_, (unsigned long)_n_, sizeof(*(ptr)));\
+ smp_mb(); \
+ __ret; \
})

#define cmpxchg64(ptr, o, n) \
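
Schematically, the resulting barrier placement (an illustrative sketch,
not taken from the file; the _local() lines assume the pre-existing
barrier-free helpers):

    xchg(ptr, x)             /* smp_mb(); __xchg(...);    smp_mb(); */
    cmpxchg(ptr, o, n)       /* smp_mb(); __cmpxchg(...); smp_mb(); */
    xchg_local(ptr, x)       /* __xchg_local(...);      no barriers */
    cmpxchg_local(ptr, o, n) /* __cmpxchg_local(...);   no barriers */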