[X86] Add the CLZERO intrinsic and enable it under znver1

This patch does the following:

1. Adds an intrinsic, int_x86_clzero, which backs __builtin_ia32_clzero (a usage sketch follows this list).
2. Detects the CLZERO feature via CPUID (function 8000_0008h, EBX bit 0 set); a detection sketch appears after the commit metadata below.
3. Adds the clzero feature under the znver1 architecture.
4. Adds a custom inserter in lowering.
5. Adds a test case that checks the intrinsic.
6. Adds the clzero instruction to the assembler tests.
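
For reference, a minimal C-level sketch of how the builtin from item 1 is expected
to be used (the function name is hypothetical; it assumes a compiler carrying this
patch with the feature enabled, e.g. via -mclzero). The builtin lowers to the
llvm.x86.clzero intrinsic that the new CodeGen test below exercises:

  /* Sketch only: zero the cache line containing the given address. */
  void zero_cache_line(void *p) {
    __builtin_ia32_clzero(p);  /* maps to the llvm.x86.clzero intrinsic */
  }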

Patch by Ganesh Gopalasubramanian, with a couple of formatting tweaks, a disassembler test, and the use of update_llc_test_checks.py from me.

Differential revision: https://reviews.llvm.org/D29385

llvm-svn: 294558
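
For context, a hedged C sketch of the CPUID check described in item 2 above. The
helper name is hypothetical, and it assumes the GCC/Clang <cpuid.h> __get_cpuid
interface rather than the in-tree detection code:

  #include <cpuid.h>
  #include <stdbool.h>

  /* Hypothetical helper: CLZERO is advertised in CPUID function 8000_0008h, EBX bit 0. */
  static bool cpu_has_clzero(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
      return false;            /* extended leaf not available */
    return (ebx & 1u) != 0;    /* EBX[0] == 1 means CLZERO is supported */
  }
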
diff --git a/llvm/test/CodeGen/X86/clzero.ll b/llvm/test/CodeGen/X86/clzero.ll
new file mode 100644
index 0000000..f15d4de
--- /dev/null
+++ b/llvm/test/CodeGen/X86/clzero.ll
@@ -0,0 +1,23 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
+; RUN: llc < %s -mtriple=x86_64-linux -mattr=+clzero | FileCheck %s --check-prefix=X64
+; RUN: llc < %s -mtriple=i386-pc-linux -mattr=+clzero | FileCheck %s --check-prefix=X32
+
+define void @foo(i8* %p) #0 {
+; X64-LABEL: foo:
+; X64:       # BB#0: # %entry
+; X64-NEXT:    leaq (%rdi), %rax
+; X64-NEXT:    clzero
+; X64-NEXT:    retq
+;
+; X32-LABEL: foo:
+; X32:       # BB#0: # %entry
+; X32-NEXT:    movl {{[0-9]+}}(%esp), %eax
+; X32-NEXT:    leal (%eax), %eax
+; X32-NEXT:    clzero
+; X32-NEXT:    retl
+entry:
+  tail call void @llvm.x86.clzero(i8* %p) #1
+  ret void
+}
+
+declare void @llvm.x86.clzero(i8*) #1