[ARM] Fix ASID version switch

Close a hole in the ASID version switch, particularly the following
scenario:

CPU0 MM PID			CPU1 MM PID
	idle
				  A	pid(A)
				  A	idle(lazy tlb)
		* new asid version triggered by B *
  B	pid(B)
  A	pid(A)
		* MM A gets new asid version *
  A	idle(lazy tlb)
				  A	pid(A)
		* CPU1 doesn't see the new ASID *

The result is that CPU1 continues running with the hardware context ID
register set to the original (stale) ASID value, while mm->context.id
contains the new ASID value.  Consequently, the next MM fault on CPU1
updates the page table entries, but the flush_tlb_page() that follows
is ineffective because it is issued for the wrong ASID.
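
For reference, the pre-patch ASID check is only reached when the mm
actually changes.  A simplified sketch of the check_context()/switch_mm()
path of this era (paraphrased for illustration, not a verbatim copy):

    /* a new ASID is allocated only when the version has rolled over */
    static inline void check_context(struct mm_struct *mm)
    {
            if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
                    __new_context(mm);
    }

    static inline void switch_mm(struct mm_struct *prev,
                                 struct mm_struct *next,
                                 struct task_struct *tsk)
    {
            if (prev != next) {
                    check_context(next);
                    cpu_switch_mm(next->pgd, next); /* programs the hw ASID */
            }
            /*
             * When CPU1 comes back from the lazy-tlb idle task to MM A,
             * prev == next, so neither check_context() nor cpu_switch_mm()
             * runs and the stale hardware ASID is left in place.
             */
    }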

There is a related case in which a threaded application is allocated
a new ASID on one CPU while another of its threads is running on a
different CPU.  That scenario is not fixed by this commit.
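
The change below serializes ASID allocation with cpu_asid_lock and
narrows mm->cpu_vm_mask to the CPU that allocated the new ASID.
Closing the hole also relies on switch_mm() re-validating the context
whenever the current CPU's bit is not yet set in cpu_vm_mask; a sketch
of what that paired check looks like (assumed here for illustration,
that hunk is not part of the diff shown below):

    /* re-check even when prev == next, if this CPU has not yet marked
     * itself in next->cpu_vm_mask since the last ASID allocation */
    if (!cpu_test_and_set(cpu, next->cpu_vm_mask) || prev != next) {
            check_context(next);
            cpu_switch_mm(next->pgd, next);
    }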

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 9da43a0..930c04c 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -14,7 +14,8 @@
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
 
-unsigned int cpu_last_asid = { 1 << ASID_BITS };
+static DEFINE_SPINLOCK(cpu_asid_lock);
+unsigned int cpu_last_asid = ASID_FIRST_VERSION;
 
 /*
  * We fork()ed a process, and we need a new context for the child
@@ -31,15 +32,16 @@
 {
 	unsigned int asid;
 
+	spin_lock(&cpu_asid_lock);
 	asid = ++cpu_last_asid;
 	if (asid == 0)
-		asid = cpu_last_asid = 1 << ASID_BITS;
+		asid = cpu_last_asid = ASID_FIRST_VERSION;
 
 	/*
 	 * If we've used up all our ASIDs, we need
 	 * to start a new version and flush the TLB.
 	 */
-	if ((asid & ~ASID_MASK) == 0) {
+	if (unlikely((asid & ~ASID_MASK) == 0)) {
 		asid = ++cpu_last_asid;
 		/* set the reserved ASID before flushing the TLB */
 		asm("mcr	p15, 0, %0, c13, c0, 1	@ set reserved context ID\n"
@@ -48,6 +50,8 @@
 		isb();
 		flush_tlb_all();
 	}
+	spin_unlock(&cpu_asid_lock);
 
+	mm->cpu_vm_mask = cpumask_of_cpu(smp_processor_id());
 	mm->context.id = asid;
 }