X86/X86_64: Switch to locked add from mfence

I finally received answers about the performance of locked add vs.
mfence for Java memory semantics.  Locked add has been faster than
mfence on every processor since the Pentium 4.  Accordingly, the
synchronization code now always uses locked add, rather than selecting
it via an instruction set feature.

Also add support in the optimizing compiler for barrier type
kNTStoreStore, which is used after non-temporal moves.

Change-Id: Ib47c2fd64c2ff2128ad677f1f39c73444afb8e94
Signed-off-by: Mark Mendell <mark.p.mendell@intel.com>
diff --git a/compiler/optimizing/code_generator_x86_64.cc b/compiler/optimizing/code_generator_x86_64.cc
index 056b69b..225f547 100644
--- a/compiler/optimizing/code_generator_x86_64.cc
+++ b/compiler/optimizing/code_generator_x86_64.cc
@@ -4058,8 +4058,10 @@
       // nop
       break;
     }
-    default:
-      LOG(FATAL) << "Unexpected memory barier " << kind;
+    case MemBarrierKind::kNTStoreStore:
+      // Non-Temporal Store/Store needs an explicit fence.
+      MemoryFence(/* non-temporal */ true);
+      break;
   }
 }