Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should make the various targets' "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM. This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :)

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
        movl Y, %eax
        shll $3, %eax
        orl X, %eax
        movl %eax, X
        ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that X and Y
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short-term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
  x = 1ULL << i;

into:
  long long tmp = 1;
  for (i = ...; ++i, tmp+=tmp)
    x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0), where Phi is the
address of the byte of P that holds the sign bit; the sign test only needs
that one byte.

//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X into t=(X*X); (t*t), to eliminate a multiply.

//===---------------------------------------------------------------------===//

An interesting(?) testcase for add/shift/mul reassociation:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types is target-dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

v4sf example(float *P) {
  return (v4sf){ P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      # cond_next
LBB16_1:        # cond_true
        incl _foo
LBB16_2:        # cond_next

emit:
        movl _foo, %eax
        cmpl $1, %edi
        sbbl $-1, %eax
        movl %eax, _foo

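For reference, a minimal sketch of the kind of source this corresponds to
(assuming foo is a global int and x is the value being tested against zero;
the names are illustrative):

  if (x)
    foo++;

The cmp/sbb sequence computes foo += (x != 0) without a branch: with x in
%edi as in the 'emit' sequence, cmpl $1 sets the carry flag exactly when x is
zero, and sbbl $-1 then adds either 1 or 0.
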
//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
  double sincos(double x, double *sin, double *cos);
  float sincosf(float x, float *sin, float *cos);
  long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687

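A small sketch of the two directions meant here (using the sincos prototype
listed above):

  /* combine: */
  double a = sin(x), b = cos(x);    /* -> sincos(x, &a, &b);        */

  /* expand, so SROA can promote s and c to registers: */
  double s, c;
  sincos(x, &s, &c);                /* -> s = sin(x); c = cos(x);   */
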
//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret i64 cst':

  %struct.X = type { i32, i32 }
  %struct.Y = type { %struct.X }

define i64 @bar() {
  %retval = alloca %struct.Y, align 8
  %tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
  store i32 0, i32* %tmp12
  %tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
  store i32 1, i32* %tmp15
  %retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
  %retval.upgrd.2 = load i64* %retval.upgrd.1
  ret i64 %retval.upgrd.2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

  %struct..0anon = type { <4 x float> }

define void @test1(<4 x float> %V, float* %P) {
  %u = alloca %struct..0anon, align 16
  %tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
  store <4 x float> %V, <4 x float>* %tmp
  %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
  %tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
  %tmp.upgrd.2 = load float* %tmp.upgrd.1
  %tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
  store float %tmp3, float* %P
  ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
  %tmp = load uint* %P
  %tmp14 = or uint %tmp, 3305111552     ; 0xC5000000
  %tmp15 = and uint %tmp14, 3321888767  ; 0xC5FFFFFF
  store uint %tmp15, uint* %P
  ret void
}

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

Compile:

int bar(int x)
{
  int t = __builtin_clz(x);
  return -(t>>5);
}

to:

_bar:   addic r3,r3,-1
        subfe r3,r3,r3
        blr

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz. Itanium, what else?

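A minimal C sketch of the identity for 32-bit values (using GCC's
__builtin_popcount for illustration):

unsigned cttz32(unsigned x) {
  /* (x - 1) & ~x sets exactly the bits below the lowest set bit of x,  */
  /* so its population count is the number of trailing zeros. For       */
  /* x == 0 this yields popcount(0xFFFFFFFF) == 32, the usual           */
  /* convention for cttz(0).                                            */
  return __builtin_popcount((x - 1) & ~x);
}
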
//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

  for(i=0; i<reg->size; i++)
    {
      /* Flip the target bit of each basis state */
      reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
    }

where MAX_UNSIGNED (and thus state) is a 64-bit integer. On a 32-bit platform
it would be just so cool to turn it into something like:

  long long Res = ((MAX_UNSIGNED) 1 << target);
  if (target < 32) {
    for(i=0; i<reg->size; i++)
      reg->node[i].state ^= Res & 0xFFFFFFFFULL;
  } else {
    for(i=0; i<reg->size; i++)
      reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
  }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but alas...

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine:

unsigned int swap_32(unsigned int v) {
  v = ((v & 0x00ff00ffU) << 8) | ((v & 0xff00ff00U) >> 8);
  v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
  return v;
}

Nor is this (yes, it really is bswap):

unsigned long reverse(unsigned v) {
  unsigned t;
  t = v ^ ((v << 16) | (v >> 16));
  t &= ~0xff0000;
  v = (v << 24) | (v >> 8);
  return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

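For reference, a sketch (in C) of the matching-sign, unsigned form of the
fold: a divide-and-compare collapses into a range check, so the divide
disappears entirely.

int in_range(unsigned X) {
  /* X / 10 == 5 holds exactly when X is in [50, 60), i.e. when  */
  /* X - 50 < 10 as an unsigned comparison.                      */
  return X / 10 == 5;
}

The item above is about doing the same thing when the signedness of the
divide and of the constants don't match.
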
//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.

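A sketch of the kind of copy loop meant here (the structure and field names
are made up for illustration, not taken from the viterbi source):

  for (i = 0; i < len; i++)
    dst->history[i] = src->history[i];

which a "loops to memcpy" pass would rewrite as:

  memcpy(dst->history, src->history, len * sizeof(dst->history[0]));
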
//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
  U64 effective_addr2;
  U32 temp = *inst;
  int r1 = (temp >> 20) & 0xf;
  int b2 = (temp >> 16) & 0xf;
  effective_addr2 = temp & 0xfff;
  if (b2) effective_addr2 += regs[b2];
  b2 = (temp >> 12) & 0xf;
  if (b2) effective_addr2 += regs[b2];
  effective_addr2 &= regs[4];
  if ((effective_addr2 & 3) == 0)
    return 1;
  return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

Promotion of i32 bswap can use i64 bswap + shr. Useful on targets with 64-bit
regs and bswap, like Itanium.

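A C sketch of the expansion (using GCC's __builtin_bswap64 for illustration):

unsigned bswap32_via_64(unsigned x) {
  /* Byte-swapping the zero-extended value leaves the swapped bytes in  */
  /* the high half of the 64-bit result; shift them back down.          */
  return (unsigned)(__builtin_bswap64((unsigned long long)x) >> 32);
}
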
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IVs (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
        mov r3, #0
        mov r2, r3
        mov r1, r3
LBB1_2: @bb
        ldr r12, LCPI1_0
        ldr r12, [r12]
        strh r2, [r12]
        ldr r12, LCPI1_1
        ldr r12, [r12]
        strh r3, [r12]
        add r1, r1, #1  <- [0,+,1]
        add r3, r3, #4
        add r2, r2, #1  <- [0,+,1]
        cmp r1, r0
        bne LBB1_2 @bb

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an unconditional branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
  %tmp.1 = and i32 %a, 1                  ; <i32> [#uses=1]
  %tmp.2 = icmp ne i32 %tmp.1, 0          ; <i1> [#uses=1]
  br i1 %tmp.2, label %then.0, label %else.0

then.0:                                   ; preds = %entry
  %tmp.5 = add i32 %a, -1                 ; <i32> [#uses=1]
  %tmp.3 = call i32 @t4( i32 %tmp.5 )     ; <i32> [#uses=1]
  br label %return

else.0:                                   ; preds = %entry
  %tmp.7 = icmp ne i32 %a, 0              ; <i1> [#uses=1]
  br i1 %tmp.7, label %then.1, label %return

then.1:                                   ; preds = %else.0
  %tmp.11 = add i32 %a, -2                ; <i32> [#uses=1]
  %tmp.9 = call i32 @t4( i32 %tmp.11 )    ; <i32> [#uses=1]
  br label %return

return:                                   ; preds = %then.1, %else.0, %then.0
  %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                      [ %tmp.9, %then.1 ]
  ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion checks.

long long fib(const long long n) {
  switch (n) {
  case 0:
  case 1:
    return n;
  default:
    return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
  %tmp = load i32* %x                     ; <i32> [#uses=0]
  %tmp.foo = call i32 @foo( i32* %x )     ; <i32> [#uses=1]
  ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
  %tmp3 = call i32 @foo( i32* %x )        ; <i32> [#uses=1]
  ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

"basicaa" should know how to look through "or" instructions that act like add
instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
basicaa can't analyze the array subscript, leading to duplicated loads in the
generated code:

void test(int X, int Y, int a[]) {
  int i;
  for (i = 2; i < 1000; i += 4) {
    a[i+0] = a[i-1+0]*a[i-2+0];
    a[i+1] = a[i-1+1]*a[i-2+1];
    a[i+2] = a[i-1+2]*a[i-2+2];
    a[i+3] = a[i-1+3]*a[i-2+3];
  }
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in PIC mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

We compile this to:
_foo:
        subl $28, %esp
        call "L1$pb"
"L1$pb":
        popl %eax
        cmpl $0, 32(%esp)
        je LBB1_2       # cond_true
LBB1_1:         # return
        # ...
        addl $28, %esp
        ret
LBB1_2:         # cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If they are not used on all paths through a
function, they should be sunk into the paths that do use them.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

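As a sketch, the kind of switch meant here (the key values are arbitrary,
chosen only to be sparse):

int classify(int tok) {
  switch (tok) {
  case 17:   return 1;
  case 77:   return 2;
  case 1093: return 3;
  case 4521: return 4;
  default:   return 0;
  }
}

Rather than a (very sparse) jump table or a compare chain, a perfect hash
maps the live keys onto a dense index, so the dispatch becomes a hash, one
compare to reject non-keys, and a table lookup.
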
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a Yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:
        pushl %esi
        movl 8(%esp), %eax
        movb _m_HotKey+3, %cl
        movb _m_HotKey+4, %dl
        movb _m_HotKey+2, %ch
        movw _m_HotKey, %si
        movw %si, (%eax)
        movb %ch, 2(%eax)
        movb %cl, 3(%eax)
        movb %dl, 4(%eax)
        popl %esi
        ret $4

GCC produces:

__Z9GetHotKeyv:
        movl _m_HotKey, %edx
        movl 4(%esp), %eax
        movl %edx, (%eax)
        movzwl _m_HotKey+4, %edx
        movw %dx, 4(%eax)
        ret $4

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

  %struct.THotKey = type { i16, i8, i8, i8 }
define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
...
  %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
  %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
  %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
  %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should extend parameter attributes to capture more information about
pointer parameters for alias analysis. Some ideas:

1. Add a "nocapture" attribute, which indicates that the callee does not store
   the address of the parameter into a global or any other memory location
   visible to the callee. This can be used to make basicaa and other analyses
   more powerful. It is true for things like memcpy, strcat, and many other
   things, including structs passed by value, most C++ references, etc.
2. Generalize readonly to be set on parameters. This is important mod/ref
   info for the function, which is important for basicaa and others. It can
   also be used by the inliner to avoid inserting a memcpy for byval
   arguments when the function is inlined.

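For concreteness, a sketch of how such parameter attributes might be spelled
(illustrative syntax for the proposal above, not an existing feature):

  declare i8* @strcat(i8* %dst, i8* nocapture readonly %src)

Here the callee may read through %src but neither writes through it nor
stashes the pointer anywhere the caller could observe.
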
These attributes can be inferred by various analysis passes such as the
globalsmodrefaa pass. Note that getting #2 right is actually really tricky.
Consider this code:

struct S;  S G;
void caller(S byvalarg) { G.field = 1; ... }
void callee() { caller(G); }

The fact that the caller does not modify the byval arg is not enough; we need
to know that it doesn't modify G either. This is very tricky.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
contains a testcase that compiles down to:

  %struct.XMM128 = type { <4 x float> }
  ..
  %src = alloca %struct.XMM128
  ..
  %tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
  %tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
  store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
  %tmp66 = load <4 x float>* %tmp65, align 16
  %tmp71 = add <4 x float> %tmp66, %tmp66

If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
into a bitcast of the vector value and a store to the pointer, then the
store->load could be easily removed.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,1,1,1,1,1,1,1};
  foo(input);
}

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able. This is good, but the memcpy
gets lowered to load/stores in the code generator. This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global. This gives us atrocious code like this:

        call "L1$pb"
"L1$pb":
        popl %eax
        movl _C.0.1444-"L1$pb"+32(%eax), %ecx
        movl %ecx, 40(%esp)
        movl _C.0.1444-"L1$pb"+20(%eax), %ecx
        movl %ecx, 28(%esp)
        movl _C.0.1444-"L1$pb"+36(%eax), %ecx
        movl %ecx, 44(%esp)
        movl _C.0.1444-"L1$pb"+44(%eax), %ecx
        movl %ecx, 52(%esp)
        movl _C.0.1444-"L1$pb"+40(%eax), %ecx
        movl %ecx, 48(%esp)
        movl _C.0.1444-"L1$pb"+12(%eax), %ecx
        movl %ecx, 20(%esp)
        movl _C.0.1444-"L1$pb"+4(%eax), %ecx
...

instead of:
        movl $1, 16(%esp)
        movl $0, 20(%esp)
        movl $1, 24(%esp)
        movl $0, 28(%esp)
        movl $1, 32(%esp)
        movl $0, 36(%esp)
        ...

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef". Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop. One trivial example is:

#include <stdio.h>
int main() {
  int nRet = 17;
  int nLoop;
  for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
    if ( nLoop & 1 )
      nRet += 2;
    else
      nRet -= 1;
  }
  return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size. The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc. On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough. Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f3/f4 right. On x86-32, several of these
generate truly horrible code, instead of using shld and friends. On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness. PPC64 misses f, f5 and f6. CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify-libcalls to strength reduce
standard library functions, but we don't currently merge them together. For
example, it is useful to merge memcpy(a,b,strlen(b)+1) -> strcpy(a,b). This
can only be done safely if "b" isn't modified between the strlen and memcpy,
of course.

//===---------------------------------------------------------------------===//

We should be able to evaluate this loop:

int test(int x_offs) {
  while (x_offs > 4)
    x_offs -= 4;
  return x_offs;
}

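Presumably this means computing the exit value in closed form instead of
running the loop. A sketch of that closed form (valid because the decrement
is a power of two; for x_offs <= 4 the loop doesn't execute at all):

int test_closed_form(int x_offs) {
  return x_offs > 4 ? ((x_offs - 1) & 3) + 1 : x_offs;
}
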
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

//===---------------------------------------------------------------------===//
