ARM FastISel integer sext/zext improvements
My recent ARM FastISel patch exposed this bug:
http://llvm.org/bugs/show_bug.cgi?id=16178
The root cause is that ARM FastISel can't select integer sext/zext pre-ARMv6 and
asserts out.
The current integer sext/zext code doesn't handle other cases gracefully
either, so this patch makes it handle all sext and zext from i1/i8/i16
to i8/i16/i32, with and without ARMv6, both in Thumb and ARM mode. This
should fix the bug as well as make FastISel faster because it bails to
SelectionDAG less often. See fastisel-ext.patch for this.
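
For reference, here is a minimal IR sketch of the patterns FastISel now selects
directly; the function names are illustrative and the lowerings noted in the
comments are plausible pre-ARMv6 sequences, not a verbatim excerpt of the patch:

; Illustrative only: pre-ARMv6 there is no UXTB/SXTB, so an 8-bit zext can be
; selected as a reg-imm AND and an 8-bit sext as a shift pair.
define i32 @zext8(i8 %a) {
  %r = zext i8 %a to i32        ; e.g. and r0, r0, #255
  ret i32 %r
}

define i32 @sext8(i8 %a) {
  %r = sext i8 %a to i32        ; e.g. lsl r0, r0, #24 then asr r0, r0, #24
  ret i32 %r
}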
fastisel-ext-tests.patch changes the current tests to always use a reg-imm AND
for 8-bit zext instead of UXTB. This simplifies the code since the AND is
supported on ARMv4T and later, and at least on A15 both should perform
exactly the same (both have exec 1, uop 1, type I).
2013-05-31-char-shift-crash.ll is an LLVM IR version of the repro for bug
16178 above.
fast-isel-ext.ll tests all sext/zext combinations that ARM FastISel
should now handle.
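
As an illustration of what such a test case can look like (the RUN line, check
pattern, and function name below are a sketch, not the exact contents of
fast-isel-ext.ll):

; RUN: llc < %s -mtriple=armv4t-unknown-unknown -O0 -fast-isel | FileCheck %s

define i32 @illustrative_zext_i8(i8 %a) {
; CHECK: and r{{[0-9]+}}, r{{[0-9]+}}, #255
  %r = zext i8 %a to i32
  ret i32 %r
}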
Note that my ARM FastISel enabling patch was reverted due to a separate
failure when dealing with MCJIT; I'll fix that second failure and then
turn FastISel on again for non-iOS ARM targets.
I've tested "make check-all" on my x86 box, and "lnt test-suite" on A15
hardware.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@183551 91177308-0d34-0410-b5e6-96231b3b80d8
diff --git a/test/CodeGen/ARM/fast-isel-intrinsic.ll b/test/CodeGen/ARM/fast-isel-intrinsic.ll
index bc9769a..d2d208b 100644
--- a/test/CodeGen/ARM/fast-isel-intrinsic.ll
+++ b/test/CodeGen/ARM/fast-isel-intrinsic.ll
@@ -17,7 +17,7 @@
; ARM: add r0, r0, #5
; ARM: movw r1, #64
; ARM: movw r2, #10
-; ARM: uxtb r1, r1
+; ARM: and r1, r1, #255
; ARM: bl {{_?}}memset
; ARM-LONG: t1
; ARM-LONG: movw r3, :lower16:L_memset$non_lazy_ptr
@@ -32,7 +32,7 @@
; THUMB: movt r1, #0
; THUMB: movs r2, #10
; THUMB: movt r2, #0
-; THUMB: uxtb r1, r1
+; THUMB: and r1, r1, #255
; THUMB: bl {{_?}}memset
; THUMB-LONG: t1
; THUMB-LONG: movw r3, :lower16:L_memset$non_lazy_ptr