Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should make the various targets' "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM.  This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)

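For example (a sketch of the expansion; "dist_fast" is a made-up name for
what the front-end would emit under -ffast-math):

#include <math.h>

double dist(double x, double y)      { return hypot(x, y); }
/* ... becomes, in effect: */
double dist_fast(double x, double y) { return sqrt(x*x + y*y); }
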
//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
        movl Y, %eax
        shll $3, %eax
        orl X, %eax
        movl %eax, X
        ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack.  But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;
}

into:

 long long tmp = 1;
 for (i = ...; ++i, tmp += tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)

//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X into t = (X*X); (t*t) to eliminate a multiply.

//===---------------------------------------------------------------------===//

An interesting testcase for add/shift/mul reassociation:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j, l, 4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1, 2, 4, and 8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

typedef float v4sf __attribute__((vector_size(16)));
v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3]};
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      #cond_next
LBB16_1:        #cond_true
        incl _foo
LBB16_2:        #cond_next

emit:
        movl _foo, %eax
        cmpl $1, %edi
        sbbl $-1, %eax
        movl %eax, _foo

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
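
For instance, a sketch of an input the combine should catch (sincos here is
the GNU extension declared above):

#include <math.h>

void polar_to_xy(double r, double t, double *x, double *y) {
  *x = r * cos(t);   /* same argument t...                          */
  *y = r * sin(t);   /* ...so one sincos(t) call could compute both */
}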

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret i64 cst':

        %struct.X = type { i32, i32 }
        %struct.Y = type { %struct.X }

define i64 @bar() {
        %retval = alloca %struct.Y, align 8
        %tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
        store i32 0, i32* %tmp12
        %tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
        store i32 1, i32* %tmp15
        %retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
        %retval.upgrd.2 = load i64* %retval.upgrd.1
        ret i64 %retval.upgrd.2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

        %struct..0anon = type { <4 x float> }

define void @test1(<4 x float> %V, float* %P) {
        %u = alloca %struct..0anon, align 16
        %tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
        store <4 x float> %V, <4 x float>* %tmp
        %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
        %tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
        %tmp.upgrd.2 = load float* %tmp.upgrd.1
        %tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
        store float %tmp3, float* %P
        ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

define void @test(i32* %P) {
        %tmp = load i32* %P
        %tmp14 = or i32 %tmp, 3305111552
        %tmp15 = and i32 %tmp14, 3321888767
        store i32 %tmp15, i32* %P
        ret void
}

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

Compile:

int bar(int x)
{
  int t = __builtin_clz(x);
  return -(t>>5);
}

to:

_bar:   addic r3,r3,-1
        subfe r3,r3,r3
        blr

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz (the popcnt of the bits below the
lowest set bit is the trailing zero count; ctlz additionally needs the highest
set bit smeared down first).  Itanium, what else?
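
A sketch of both lowerings in C (cttz32/ctlz32 are made-up names;
__builtin_popcount stands in for the target's popcnt instruction):

#include <stdint.h>

static int cttz32(uint32_t x) {
  /* bits strictly below the lowest set bit, popcounted; x==0 gives 32 */
  return __builtin_popcount((x - 1) & ~x);
}

static int ctlz32(uint32_t x) {
  /* smear the highest set bit into every lower position, then count
     the zero bits that remain above it */
  x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
  x |= x >> 8;  x |= x >> 16;
  return __builtin_popcount(~x);
}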

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFFULL;
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias
reg->node[i], but alas...

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These idioms should be recognized as popcount (see PR1488):

unsigned countbits_slow(unsigned v) {
  unsigned c;
  for (c = 0; v; v >>= 1)
    c += v & 1;
  return c;
}
unsigned countbits_fast(unsigned v){
  unsigned c;
  for (c = 0; v; c++)
    v &= v - 1; // clear the least significant bit set
  return c;
}

typedef unsigned long long BITBOARD;
int PopCnt(register BITBOARD a) {
  register int c=0;
  while(a) {
    c++;
    a &= a - 1;
  }
  return c;
}
unsigned int popcount(unsigned int input) {
  unsigned int count = 0;
  for (unsigned int i = 0; i < 4 * 8; i++)
    count += (input >> i) & 1;
  return count;
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass.

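A sketch of the kind of loop such a pass would rewrite (the function is a
made-up stand-in for the viterbi history copies):

void update_history(int *dst, const int *src, int n) {
  for (int i = 0; i < n; i++)   /* recognizable as                   */
    dst[i] = src[i];            /* memcpy(dst, src, n * sizeof(int)) */
}
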
//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

Promote for i32 bswap can use i64 bswap + shr.  Useful on targets with 64-bit
regs and bswap, like itanium.

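A sketch in C of the intended promotion (bswap32_via64 is a made-up name):

#include <stdint.h>

uint32_t bswap32_via64(uint32_t x) {
  /* byte-swap in the 64-bit unit, then shift the result back down */
  return (uint32_t)(__builtin_bswap64(x) >> 32);
}
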
//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IV's (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
        mov r3, #0
        mov r2, r3
        mov r1, r3
LBB1_2: @bb
        ldr r12, LCPI1_0
        ldr r12, [r12]
        strh r2, [r12]
        ldr r12, LCPI1_1
        ldr r12, [r12]
        strh r3, [r12]
        add r1, r1, #1  <- [0,+,1]
        add r3, r3, #4
        add r2, r2, #1  <- [0,+,1]
        cmp r1, r0
        bne LBB1_2      @bb

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-as < %s | opt -tailcallelim | llvm-dis | not grep call

define i32 @t4(i32 %a) {
entry:
        %tmp.1 = and i32 %a, 1          ; <i32> [#uses=1]
        %tmp.2 = icmp ne i32 %tmp.1, 0  ; <i1> [#uses=1]
        br i1 %tmp.2, label %then.0, label %else.0

then.0:         ; preds = %entry
        %tmp.5 = add i32 %a, -1         ; <i32> [#uses=1]
        %tmp.3 = call i32 @t4( i32 %tmp.5 )     ; <i32> [#uses=1]
        br label %return

else.0:         ; preds = %entry
        %tmp.7 = icmp ne i32 %a, 0      ; <i1> [#uses=1]
        br i1 %tmp.7, label %then.1, label %return

then.1:         ; preds = %else.0
        %tmp.11 = add i32 %a, -2        ; <i32> [#uses=1]
        %tmp.9 = call i32 @t4( i32 %tmp.11 )    ; <i32> [#uses=1]
        br label %return

return:         ; preds = %then.1, %else.0, %then.0
        %result.0 = phi i32 [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                        [ %tmp.9, %then.1 ]
        ret i32 %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion checks.

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Tail recursion elimination should handle:

int pow2m1(int n) {
  if (n == 0)
    return 0;
  return 2 * pow2m1 (n - 1) + 1;
}

Also, multiplies can be turned into SHL's, so they should be handled as if
they were associative.  "return foo() << 1" can be tail recursion eliminated.

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-as < %s | opt -argpromotion | llvm-dis | grep x.val

define internal i32 @foo(i32* %x) {
entry:
        %tmp = load i32* %x             ; <i32> [#uses=0]
        %tmp.foo = call i32 @foo( i32* %x )     ; <i32> [#uses=1]
        ret i32 %tmp.foo
}

define i32 @bar(i32* %x) {
entry:
        %tmp3 = call i32 @foo( i32* %x )        ; <i32> [#uses=1]
        ret i32 %tmp3
}

//===---------------------------------------------------------------------===//

"basicaa" should know how to look through "or" instructions that act like add
instructions.  For example in this code, the x*4+1 is turned into x*4 | 1, and
basicaa can't analyze the array subscript, leading to duplicated loads in the
generated code:

void test(int X, int Y, int a[]) {
  int i;
  for (i=2; i<1000; i+=4) {
    a[i+0] = a[i-1+0]*a[i-2+0];
    a[i+1] = a[i-1+1]*a[i-2+1];
    a[i+2] = a[i-1+2]*a[i-2+2];
    a[i+3] = a[i-1+3]*a[i-2+3];
  }
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
        subl    $28, %esp
        call    "L1$pb"
"L1$pb":
        popl    %eax
        cmpl    $0, 32(%esp)
        je      LBB1_2  # cond_true
LBB1_1: # return
        # ...
        addl    $28, %esp
        ret
LBB1_2: # cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html

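A sketch of what the lowering could look like; the keys, the h(x) = x % 5
hash, and the returned values are made-up, and a real implementation would
pick a collision-free hash function offline per switch:

int f(int x) {
  switch (x) {
  case 17:    return 1;
  case 4098:  return 2;
  case 90210: return 3;
  default:    return 0;
  }
}

/* h(x) = x % 5 happens to be perfect for {17, 4098, 90210}: slots 2, 3, 0.
   The key table rejects every other input. */
int f_hashed(int x) {
  static const int keys[5] = { 90210, 0, 17, 4098, 0 };
  static const int vals[5] = { 3,     0, 1,  2,    0 };
  unsigned h = (unsigned)x % 5u;
  return keys[h] == x ? vals[h] : 0;
}
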
//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations.  On a yonah, this loop:

double a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];
}

is twice as slow as this loop:

long long a[256];
void foo() {
  int i, b;
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);
}

and I suspect other processors are similar.  On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable.  For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:
        pushl   %esi
        movl    8(%esp), %eax
        movb    _m_HotKey+3, %cl
        movb    _m_HotKey+4, %dl
        movb    _m_HotKey+2, %ch
        movw    _m_HotKey, %si
        movw    %si, (%eax)
        movb    %ch, 2(%eax)
        movb    %cl, 3(%eax)
        movb    %dl, 4(%eax)
        popl    %esi
        ret     $4

GCC produces:

__Z9GetHotKeyv:
        movl    _m_HotKey, %edx
        movl    4(%esp), %eax
        movl    %edx, (%eax)
        movzwl  _m_HotKey+4, %edx
        movw    %dx, 4(%eax)
        ret     $4

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

        %struct.THotKey = type { i16, i8, i8, i8 }
define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
...
        %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
        %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
        %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
        %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should extend parameter attributes to capture more information about
pointer parameters for alias analysis.  Some ideas:

1. Add a "nocapture" attribute, which indicates that the callee does not store
   the address of the parameter into a global or any other memory location
   visible to the callee.  This can be used to make basicaa and other analyses
   more powerful.  It is true for things like memcpy, strcat, and many other
   things, including structs passed by value, most C++ references, etc.
2. Generalize readonly to be set on parameters.  This is important mod/ref
   info for the function, which is important for basicaa and others.  It can
   also be used by the inliner to avoid inserting a memcpy for byval
   arguments when the function is inlined.

These attributes can be inferred by various analysis passes such as the
globalsmodrefaa pass.  Note that getting #2 right is actually really tricky.
Consider this code:

struct S;  S G;
void caller(S byvalarg) { G.field = 1; ... }
void callee() { caller(G); }

The fact that the caller does not modify the byval arg is not enough; we need
to know that it doesn't modify G either.  This is very tricky.

//===---------------------------------------------------------------------===//

We should add an FRINT node to the DAG to model targets that have legal
implementations of ceil/floor/rint.

//===---------------------------------------------------------------------===//

This GCC bug: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34043
contains a testcase that compiles down to:

        %struct.XMM128 = type { <4 x float> }
..
        %src = alloca %struct.XMM128
..
        %tmp6263 = bitcast %struct.XMM128* %src to <2 x i64>*
        %tmp65 = getelementptr %struct.XMM128* %src, i32 0, i32 0
        store <2 x i64> %tmp5899, <2 x i64>* %tmp6263, align 16
        %tmp66 = load <4 x float>* %tmp65, align 16
        %tmp71 = add <4 x float> %tmp66, %tmp66

If the mid-level optimizer turned the bitcast of pointer + store of tmp5899
into a bitcast of the vector value and a store to the pointer, then the
store->load could be easily removed.

//===---------------------------------------------------------------------===//

Consider:

int test() {
  long long input[8] = {1,1,1,1,1,1,1,1};
  foo(input);
}

We currently compile this into a memcpy from a global array since the
initializer is fairly large and not memset'able.  This is good, but the memcpy
gets lowered to load/stores in the code generator.  This is also ok, except
that the codegen lowering for memcpy doesn't handle the case when the source
is a constant global.  This gives us atrocious code like this:

        call    "L1$pb"
"L1$pb":
        popl    %eax
        movl    _C.0.1444-"L1$pb"+32(%eax), %ecx
        movl    %ecx, 40(%esp)
        movl    _C.0.1444-"L1$pb"+20(%eax), %ecx
        movl    %ecx, 28(%esp)
        movl    _C.0.1444-"L1$pb"+36(%eax), %ecx
        movl    %ecx, 44(%esp)
        movl    _C.0.1444-"L1$pb"+44(%eax), %ecx
        movl    %ecx, 52(%esp)
        movl    _C.0.1444-"L1$pb"+40(%eax), %ecx
        movl    %ecx, 48(%esp)
        movl    _C.0.1444-"L1$pb"+12(%eax), %ecx
        movl    %ecx, 20(%esp)
        movl    _C.0.1444-"L1$pb"+4(%eax), %ecx
...

instead of:
        movl    $1, 16(%esp)
        movl    $0, 20(%esp)
        movl    $1, 24(%esp)
        movl    $0, 28(%esp)
        movl    $1, 32(%esp)
        movl    $0, 36(%esp)
        ...

//===---------------------------------------------------------------------===//

http://llvm.org/PR717:

The following code should compile into "ret int undef".  Instead, LLVM
produces "ret int 0":

int f() {
  int x = 4;
  int y;
  if (x == 3) y = 0;
  return y;
}

//===---------------------------------------------------------------------===//

The loop unroller should partially unroll loops (instead of peeling them)
when code growth isn't too bad and when an unroll count allows simplification
of some code within the loop.  One trivial example is:

#include <stdio.h>
int main() {
  int nRet = 17;
  int nLoop;
  for ( nLoop = 0; nLoop < 1000; nLoop++ ) {
    if ( nLoop & 1 )
      nRet += 2;
    else
      nRet -= 1;
  }
  return nRet;
}

Unrolling by 2 would eliminate the '&1' in both copies, leading to a net
reduction in code size.  The resultant code would then also be suitable for
exit value computation.

//===---------------------------------------------------------------------===//

We miss a bunch of rotate opportunities on various targets, including ppc, x86,
etc.  On X86, we miss a bunch of 'rotate by variable' cases because the rotate
matching code in dag combine doesn't look through truncates aggressively
enough.  Here are some testcases reduced from GCC PR17886:

unsigned long long f(unsigned long long x, int y) {
  return (x << y) | (x >> 64-y);
}
unsigned f2(unsigned x, int y){
  return (x << y) | (x >> 32-y);
}
unsigned long long f3(unsigned long long x){
  int y = 9;
  return (x << y) | (x >> 64-y);
}
unsigned f4(unsigned x){
  int y = 10;
  return (x << y) | (x >> 32-y);
}
unsigned long long f5(unsigned long long x, unsigned long long y) {
  return (x << 8) | ((y >> 48) & 0xffull);
}
unsigned long long f6(unsigned long long x, unsigned long long y, int z) {
  switch(z) {
  case 1:
    return (x << 8) | ((y >> 48) & 0xffull);
  case 2:
    return (x << 16) | ((y >> 40) & 0xffffull);
  case 3:
    return (x << 24) | ((y >> 32) & 0xffffffull);
  case 4:
    return (x << 32) | ((y >> 24) & 0xffffffffull);
  default:
    return (x << 40) | ((y >> 16) & 0xffffffffffull);
  }
}

On X86-64, we only handle f2/f3/f4 right.  On x86-32, a few of these
generate truly horrible code, instead of using shld and friends.  On
ARM, we end up with calls to L___lshrdi3/L___ashldi3 in f, which is
badness.  PPC64 misses f, f5 and f6.  CellSPU aborts in isel.

//===---------------------------------------------------------------------===//

We do a number of simplifications in simplify libcalls to strength reduce
standard library functions, but we don't currently merge them together.  For
example, it is useful to merge memcpy(a,b,strlen(b)) -> strcpy.  This can only
be done safely if "b" isn't modified between the strlen and memcpy of course.

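For instance, a sketch of the pattern (copy_str is a made-up example; a real
simplification would also have to account for the nul terminator):

#include <string.h>

void copy_str(char *a, const char *b) {
  memcpy(a, b, strlen(b));   /* strlen feeding memcpy's length is the
                                pattern the merged optimization keys on */
}
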
//===---------------------------------------------------------------------===//

We should be able to evaluate this loop:

int test(int x_offs) {
  while (x_offs > 4)
    x_offs -= 4;
  return x_offs;
}

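A sketch of the closed form the evaluation should produce (assuming the
subtraction doesn't overflow):

int test_evaluated(int x_offs) {
  return x_offs > 4 ? ((x_offs - 5) % 4) + 1 : x_offs;
}
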
//===---------------------------------------------------------------------===//

Reassociate should turn things like:

int factorial(int X) {
  return X*X*X*X*X*X*X*X;
}

into llvm.powi calls, allowing the code generator to produce balanced
multiplication trees.

//===---------------------------------------------------------------------===//

We generate a horrible libcall for llvm.powi.  For example, we compile:

#include <cmath>
double f(double a) { return std::pow(a, 4); }

into:

__Z1fd:
        subl    $12, %esp
        movsd   16(%esp), %xmm0
        movsd   %xmm0, (%esp)
        movl    $4, 8(%esp)
        call    L___powidf2$stub
        addl    $12, %esp
        ret

GCC produces:

__Z1fd:
        subl    $12, %esp
        movsd   16(%esp), %xmm0
        mulsd   %xmm0, %xmm0
        mulsd   %xmm0, %xmm0
        movsd   %xmm0, (%esp)
        fldl    (%esp)
        addl    $12, %esp
        ret

//===---------------------------------------------------------------------===//

We compile this program: (from GCC PR11680)
http://gcc.gnu.org/bugzilla/attachment.cgi?id=4487

Into code that runs the same speed in fast/slow modes, but both modes run 2x
slower than when compiled with GCC (either 4.0 or 4.2):

$ llvm-g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
1.821u 0.003s 0:01.82 100.0%    0+0k 0+0io 0pf+0w

$ g++ perf.cpp -O3 -fno-exceptions
$ time ./a.out fast
0.821u 0.001s 0:00.82 100.0%    0+0k 0+0io 0pf+0w

It looks like we are making the same inlining decisions, so this may be raw
codegen badness or something else (haven't investigated).

//===---------------------------------------------------------------------===//

We miss some instcombines for stuff like this:
void bar (void);
void foo (unsigned int a) {
  /* This one is equivalent to a >= (3 << 2).  */
  if ((a >> 2) >= 3)
    bar ();
}

A few other related ones are in GCC PR14753.

//===---------------------------------------------------------------------===//

Divisibility by constant can be simplified (according to GCC PR12849) from
being a mulhi to being a mullo (cheaper).  Testcase:

void bar(unsigned n) {
  if (n % 3 == 0)
    true();
}

I think this basically amounts to a dag combine to simplify comparisons against
multiply hi's into a comparison against the mullo.

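A sketch of the mullo form (assuming 32-bit unsigned: 0xAAAAAAAB is the
multiplicative inverse of 3 mod 2^32, and UINT_MAX/3 == 0x55555555):

int divisible_by_3(unsigned n) {
  /* multiples of 3 are exactly the values the inverse maps into
     [0, UINT_MAX/3]: one mul-lo and a compare, no divide or mulhi */
  return n * 0xAAAAAAABu <= 0x55555555u;
}
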
//===---------------------------------------------------------------------===//

SROA is not promoting the union on the stack in this example, we should end
up with no allocas.

union vec2d {
    double e[2];
    double v __attribute__((vector_size(16)));
};
typedef union vec2d vec2d;

static vec2d a={{1,2}}, b={{3,4}};

vec2d foo () {
    return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
}

//===---------------------------------------------------------------------===//

This C++ file:

void g();
struct A {
  int n; int m;
  A& operator++(void) { ++n; if (n == m) g(); return *this; }
  A() : n(0), m(0) { }
  friend bool operator!=(A const& a1, A const& a2) { return a1.n != a2.n; }
};
void testfunction(A& iter) { A const end; while (iter != end) ++iter; }

Compiles down to:

bb:             ; preds = %bb3.backedge, %bb.nph
        %.rle = phi i32 [ %1, %bb.nph ], [ %7, %bb3.backedge ]  ; <i32> [#uses=1]
        %4 = add i32 %.rle, 1           ; <i32> [#uses=2]
        store i32 %4, i32* %0, align 4
        %5 = load i32* %3, align 4      ; <i32> [#uses=1]
        %6 = icmp eq i32 %4, %5         ; <i1> [#uses=1]
        br i1 %6, label %bb1, label %bb3.backedge

bb1:            ; preds = %bb
        tail call void @_Z1gv()
        br label %bb3.backedge

bb3.backedge:           ; preds = %bb, %bb1
        %7 = load i32* %0, align 4      ; <i32> [#uses=2]

The %7 load is partially redundant with the store of %4 to %0; GVN's PRE
should remove it, but it doesn't apply to memory objects.

//===---------------------------------------------------------------------===//

Better mod/ref analysis for scanf would allow us to eliminate the vtable and a
bunch of other stuff from this example (see PR1604):

#include <cstdio>
struct test {
  int val;
  virtual ~test() {}
};

int main() {
  test t;
  std::scanf("%d", &t.val);
  std::printf("%d\n", t.val);
}

//===---------------------------------------------------------------------===//

Instcombine will merge comparisons like (x >= 10) && (x < 20) by producing
(x - 10) u< 10, but only when the comparisons have matching sign.

This could be converted with a similar technique.  (PR1941)

define i1 @test(i8 %x) {
  %A = icmp uge i8 %x, 5
  %B = icmp slt i8 %x, 20
  %C = and i1 %A, %B
  ret i1 %C
}

//===---------------------------------------------------------------------===//

These functions perform the same computation, but produce different assembly.

define i8 @select(i8 %x) readnone nounwind {
  %A = icmp ult i8 %x, 250
  %B = select i1 %A, i8 0, i8 1
  ret i8 %B
}

define i8 @addshr(i8 %x) readnone nounwind {
  %A = zext i8 %x to i9
  %B = add i9 %A, 6     ;; 256 - 250 == 6
  %C = lshr i9 %B, 8
  %D = trunc i9 %C to i8
  ret i8 %D
}

//===---------------------------------------------------------------------===//

From gcc bug 24696:
int
f (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) || ((b & (c - 1)) != 0);
}
int
f2 (unsigned long a, unsigned long b, unsigned long c)
{
  return ((a & (c - 1)) != 0) | ((b & (c - 1)) != 0);
}
Both should combine to ((a|b) & (c-1)) != 0.  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 20192:
#define PMD_MASK    (~((1UL << 23) - 1))
void clear_pmd_range(unsigned long start, unsigned long end)
{
   if (!(start & ~PMD_MASK) && !(end & ~PMD_MASK))
       f();
}
The expression should optimize to something like
"!((start|end)&~PMD_MASK)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 15241:
unsigned int
foo (unsigned int a, unsigned int b)
{
 if (a <= 7 && b <= 7)
   baz ();
}
Should combine to "(a|b) <= 7".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 3756:
int
pn (int n)
{
    return (n >= 0 ? 1 : -1);
}
Should combine to (n >> 31) | 1.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

From GCC Bug 28685:
int test(int a, int b)
{
  int lt = a < b;
  int eq = a == b;

  return (lt || eq);
}
Should combine to "a <= b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

void a(int variable)
{
  if (variable == 4 || variable == 6)
    bar();
}
This should optimize to "if ((variable | 2) == 6)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts | llc".

//===---------------------------------------------------------------------===//

unsigned int f(unsigned int i, unsigned int n) {++i; if (i == n) ++i; return i;}
unsigned int f2(unsigned int i, unsigned int n) {++i; i += i == n; return i;}
These should combine to the same thing.  Currently, the first function
produces better code on X86.

//===---------------------------------------------------------------------===//

From GCC Bug 15784:
#define abs(x) x>0?x:-x
int f(int x, int y)
{
 return (abs(x)) >= 0;
}
This should optimize to x != INT_MIN.  (With -fwrapv.)  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 14753:
void
rotate_cst (unsigned int a)
{
 a = (a << 10) | (a >> 22);
 if (a == 123)
   bar ();
}
void
minus_cst (unsigned int a)
{
 unsigned int tem;

 tem = 20 - a;
 if (tem == 5)
   bar ();
}
void
mask_gt (unsigned int a)
{
 /* This is equivalent to a > 15.  */
 if ((a & ~7) > 8)
   bar ();
}
void
rshift_gt (unsigned int a)
{
 /* This is equivalent to a > 23.  */
 if ((a >> 2) > 5)
   bar ();
}
All should simplify to a single comparison.  All of these are
currently not optimized with "clang -emit-llvm-bc | opt
-std-compile-opts".

//===---------------------------------------------------------------------===//

From GCC Bug 32605:
int c(int* x) {return (char*)x+2 == (char*)x;}
Should combine to 0.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts" (although llc can optimize it).

//===---------------------------------------------------------------------===//

int a(unsigned char* b) {return *b > 99;}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(unsigned b) {return ((b << 31) | (b << 30)) >> 31;}
Should be combined to "((b >> 1) | b) & 1".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned x, unsigned y) { return x | (y & 1) | (y & 2);}
Should combine to "x | (y & 3)".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return ((a | 1) & 3) | (a & -4);}
Should combine to "a | 1".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (~a & c) | ((c|a) & b);}
Should fold to "(~a & c) | (a & b)".  Currently not optimized with
"clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a,int b) {return (~(a|b))|a;}
Should fold to "a|~b".  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b) {return (a&&b) || (a&&!b);}
Should fold to "a".  Currently not optimized with "clang -emit-llvm-bc
| opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (!a&&c);}
Should fold to "a ? b : c", or at least something sane.  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int a, int b, int c) {return (a&&b) || (a&&c) || (a&&b&&c);}
Should fold to a && (b || c).  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x | ((x & 8) ^ 8);}
Should combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return x ^ ((x & 8) ^ 8);}
Should also combine to x | 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -1 : -9;}
Should combine to (x | -9) ^ 8.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return (x & 8) == 0 ? -9 : -1;}
Should combine to x | -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

int a(int x) {return ((x | -9) ^ 8) & x;}
Should combine to x & -9.  Currently not optimized with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned a) {return a * 0x11111111 >> 28 & 1;}
Should combine to "a * 0x88888888 >> 31".  Currently not optimized
with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(char* x) {if ((*x & 32) == 0) return b();}
There's an unnecessary zext in the generated code with "clang
-emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

unsigned a(unsigned long long x) {return 40 * (x >> 1);}
Should combine to "20 * (((unsigned)x) & -2)".  Currently not
optimized with "clang -emit-llvm-bc | opt -std-compile-opts".

//===---------------------------------------------------------------------===//

We would like to do the following transform in the instcombiner:

  -X/C -> X/-C

However, this isn't valid if (-X) overflows.  With wrapping negation,
X = INT_MIN and C = 2 give -X/C = INT_MIN/2 but X/-C = INT_MIN/-2, which
differ.  We can implement this when we have the concept of a "C signed
subtraction" operator which is undefined on overflow.

//===---------------------------------------------------------------------===//

This was noticed in the entryblock for grokdeclarator in 403.gcc:

        %tmp = icmp eq i32 %decl_context, 4
        %decl_context_addr.0 = select i1 %tmp, i32 3, i32 %decl_context
        %tmp1 = icmp eq i32 %decl_context_addr.0, 1
        %decl_context_addr.1 = select i1 %tmp1, i32 0, i32 %decl_context_addr.0

tmp1 should be simplified to something like:
  (!tmp || decl_context == 1)

This allows recursive simplifications; tmp1 is used all over the place in
the function, e.g. by:

        %tmp23 = icmp eq i32 %decl_context_addr.1, 0    ; <i1> [#uses=1]
        %tmp24 = xor i1 %tmp1, true     ; <i1> [#uses=1]
        %or.cond8 = and i1 %tmp23, %tmp24       ; <i1> [#uses=1]

later.

//===---------------------------------------------------------------------===//

Store sinking: This code:

void f (int n, int *cond, int *res) {
  int i;
  *res = 0;
  for (i = 0; i < n; i++)
    if (*cond)
      *res ^= 234; /* (*) */
}

On this function GVN hoists the fully redundant value of *res, but nothing
moves the store out.  This gives us this code:

bb:             ; preds = %bb2, %entry
        %.rle = phi i32 [ 0, %entry ], [ %.rle6, %bb2 ]
        %i.05 = phi i32 [ 0, %entry ], [ %indvar.next, %bb2 ]
        %1 = load i32* %cond, align 4
        %2 = icmp eq i32 %1, 0
        br i1 %2, label %bb2, label %bb1

bb1:            ; preds = %bb
        %3 = xor i32 %.rle, 234
        store i32 %3, i32* %res, align 4
        br label %bb2

bb2:            ; preds = %bb, %bb1
        %.rle6 = phi i32 [ %3, %bb1 ], [ %.rle, %bb ]
        %indvar.next = add i32 %i.05, 1
        %exitcond = icmp eq i32 %indvar.next, %n
        br i1 %exitcond, label %return, label %bb

DSE should sink partially dead stores to get the store out of the loop.

Here's another partial dead case:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12395

//===---------------------------------------------------------------------===//

Scalar PRE hoists the mul in the common block up to the else:

int test (int a, int b, int c, int g) {
  int d, e;
  if (a)
    d = b * c;
  else
    d = b - c;
  e = b * c + g;
  return d + e;
}

It would be better to do the mul once to reduce codesize above the if.
This is GCC PR38204.

//===---------------------------------------------------------------------===//

GCC PR37810 is an interesting case where we should sink load/store reload
into the if block and outside the loop, so we don't reload/store it on the
non-call path.

for () {
  *P += 1;
  if ()
    call();
  else
    ...
}
->
tmp = *P
for () {
  tmp += 1;
  if () {
    *P = tmp;
    call();
    tmp = *P;
  } else ...
}
*P = tmp;

//===---------------------------------------------------------------------===//

GCC PR37166: Sinking of loads prevents SROA'ing the "g" struct on the stack
leading to excess stack traffic.  This could be handled by GVN with some crazy
symbolic phi translation.  The code we get looks like (g is on the stack):

bb2:            ; preds = %bb1
..
        %9 = getelementptr %struct.f* %g, i32 0, i32 0
        store i32 %8, i32* %9, align 4
        br label %bb3

bb3:            ; preds = %bb1, %bb2, %bb
        %c_addr.0 = phi %struct.f* [ %g, %bb2 ], [ %c, %bb ], [ %c, %bb1 ]
        %b_addr.0 = phi %struct.f* [ %b, %bb2 ], [ %g, %bb ], [ %b, %bb1 ]
        %10 = getelementptr %struct.f* %c_addr.0, i32 0, i32 0
        %11 = load i32* %10, align 4

%11 is fully redundant, and in BB2 it should have the value %8.

GCC PR33344 is a similar case.

//===---------------------------------------------------------------------===//

There are many load PRE testcases in testsuite/gcc.dg/tree-ssa/loadpre* in the
GCC testsuite, and many more PRE testcases named ssa-pre-*.c.

Other simple load PRE cases:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35287
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34677 (licm does this)
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29789 (SPEC2K6)
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=23455
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14705

//===---------------------------------------------------------------------===//

When GVN/PRE finds a store of float* to a must-aliased pointer when expecting
an int*, it should turn it into a bitcast.  This is a nice generalization of
the SROA hack that would apply to other cases, e.g.:

int foo(int C, int *P, float X) {
  if (C) {
    bar();
    *P = 42;
  } else
    *(float*)P = X;

  return *P;
}

One example (that requires crazy phi translation) is:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16799

//===---------------------------------------------------------------------===//

A/B get pinned to the stack because we turn an if/then into a select instead
of PRE'ing the load/store.  This may be fixable in instcombine:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37892

Interesting missed case because of control flow flattening (should be 2 loads):
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26629

//===---------------------------------------------------------------------===//

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19633
We could eliminate the branch condition here, loading from null is undefined:

struct S { int w, x, y, z; };
struct T { int r; struct S s; };
void bar (struct S, int);
void foo (int a, struct T b)
{
  struct S *c = 0;
  if (a)
    c = &b.s;
  bar (*c, a);
}

//===---------------------------------------------------------------------===//