Update
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27643 91177308-0d34-0410-b5e6-96231b3b80d8
diff --git a/lib/Target/X86/README.txt b/lib/Target/X86/README.txt
index 08dcf5a..f589917 100644
--- a/lib/Target/X86/README.txt
+++ b/lib/Target/X86/README.txt
@@ -191,6 +191,18 @@
should be made smart enough to canonicalize the load into the RHS of a compare
when it can invert the result of the compare for free.
+How about intrinsics? An example is:
+ *res = _mm_mulhi_epu16(*A, _mm_mul_epu32(*B, *C));
+
+compiles to
+ pmuludq (%eax), %xmm0
+ movl 8(%esp), %eax
+ movdqa (%eax), %xmm1
+ pmulhuw %xmm0, %xmm1
+
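+Since pmulhuw is commutative, the load of *A could presumably be folded
+directly into it by commuting the operands, giving something like (a
+sketch of the desired output, not what we generate today):
+	pmuludq	(%eax), %xmm0
+	movl	8(%esp), %eax
+	pmulhuw	(%eax), %xmm0
+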
+The transformation probably requires an X86-specific pass or a
+target-specific DAG combiner hook.
+
//===---------------------------------------------------------------------===//
LSR should be turned on for the X86 backend and tuned to take advantage of its