IRgen/ABI/ARM: Trust the backend to pass vectors correctly for the given ABI.
- Therefore, we can lower out the NEON wrapper structs and pass the vectors
directly. This makes a huge difference in the cleanliness of the IR after
optimization.
- I will trust, but verify, via future ABITest testing (for APCS-GNU, at
least).
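 - As an illustrative sketch only (this function is not part of the test added
   below, and the exact IR still depends on the target ABI), passing NEON
   vectors by value should now show up as plain <16 x i8> parameters rather
   than coerced wrapper aggregates:

       #include <arm_neon.h>

       /* Hypothetical example: with the wrapper structs lowered out, the two
          arguments are expected to be passed directly, roughly as
            @f1(<16 x i8> %a, <16 x i8> %b)
          in the generated IR. */
       int8x16_t f1(int8x16_t a, int8x16_t b) {
         return vaddq_s8(a, b);
       }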
llvm-svn: 114618
diff --git a/clang/test/CodeGen/arm-vector-arguments.c b/clang/test/CodeGen/arm-vector-arguments.c
new file mode 100644
index 0000000..0d1bdd1
--- /dev/null
+++ b/clang/test/CodeGen/arm-vector-arguments.c
@@ -0,0 +1,13 @@
+// RUN: %clang_cc1 -triple thumbv7-apple-darwin9 \
+// RUN:   -target-abi apcs-gnu \
+// RUN:   -target-cpu cortex-a8 \
+// RUN:   -mfloat-abi soft \
+// RUN:   -target-feature +soft-float-abi \
+// RUN:   -emit-llvm -w -o - %s | FileCheck %s
+
+#include <arm_neon.h>
+
+// CHECK: define void @f0(%struct.int8x16x2_t* sret %agg.result, <16 x i8> %{{.*}}, <16 x i8> %{{.*}})
+int8x16x2_t f0(int8x16_t a0, int8x16_t a1) {
+  return vzipq_s8(a0, a1);
+}