[IA64] Fix race in mm-context wrap-around logic.

The patch below should fix a race that could cause stale TLB entries:
specifically, two CPUs racing to enter wrap_mmu_context().  The losing
CPU would find that, by the time it acquired ctx.lock, mm->context
already held a valid value, but it then failed to (re-)check the
delayed-TLB-flushing logic and hence could end up using a context
number while stale entries were still in its TLB.  The fix is to check
for delayed TLB flushes only after mm->context is valid (non-zero), as
sketched below.  The patch also makes GCC v4.x happier by defining a
non-volatile variant of mm_context_t, called nv_mm_context_t.
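For illustration, here is a minimal sketch of the fixed allocation
path.  It is paraphrased from the patched get_mmu_context()/ia64_ctx
code, not copied from the diff (whose mmu_context.h hunk is not shown
above); delayed_tlb_flush() stands in for the factored-out per-CPU
flush check:

	static inline nv_mm_context_t
	get_mmu_context (struct mm_struct *mm)
	{
		unsigned long flags;
		/* read the volatile mm->context once, into a plain long */
		nv_mm_context_t context = mm->context;

		if (unlikely(!context)) {
			spin_lock_irqsave(&ia64_ctx.lock, flags);
			/*
			 * Re-check now that we hold ctx.lock: another
			 * CPU may have won the race and installed a
			 * valid context already.
			 */
			context = mm->context;
			if (context == 0) {
				if (ia64_ctx.next >= ia64_ctx.limit)
					wrap_mmu_context(mm);
				mm->context = context = ia64_ctx.next++;
			}
			spin_unlock_irqrestore(&ia64_ctx.lock, flags);
		}
		/*
		 * Only now, with a valid context in hand, honor any
		 * delayed TLB flush -- this is the (re-)check that the
		 * losing CPU used to skip.
		 */
		delayed_tlb_flush();
		return context;
	}

Declaring the function to return nv_mm_context_t rather than the
volatile-qualified mm_context_t appears to be what keeps GCC v4.x
quiet, since a volatile qualifier on a function return type would
otherwise draw a warning.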

Signed-off-by: David Mosberger-Tang <David.Mosberger@acm.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
diff --git a/include/asm-ia64/mmu.h b/include/asm-ia64/mmu.h
index ae15253..611432b 100644
--- a/include/asm-ia64/mmu.h
+++ b/include/asm-ia64/mmu.h
@@ -2,10 +2,12 @@
 #define __MMU_H
 
 /*
- * Type for a context number.  We declare it volatile to ensure proper ordering when it's
- * accessed outside of spinlock'd critical sections (e.g., as done in activate_mm() and
- * init_new_context()).
+ * Type for a context number.  We declare it volatile to ensure proper
+ * ordering when it's accessed outside of spinlock'd critical sections
+ * (e.g., as done in activate_mm() and init_new_context()).
  */
 typedef volatile unsigned long mm_context_t;
 
+typedef unsigned long nv_mm_context_t;
+
 #endif