Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should make the various targets' "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM.  This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs.  Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math).  Misc/mandel will like this. :)
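
A minimal C sketch of the intended expansion (assumes -ffast-math so errno and
extra precision are waived; sqrt() is what lowers to llvm.sqrt):

#include <math.h>

double dist(double x, double y) {
  return hypot(x, y);             /* library call today */
}

/* What the front-end could emit instead under -ffast-math: */
double dist_expanded(double x, double y) {
  return sqrt(x*x + y*y);         /* lowers to llvm.sqrt */
}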

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
        movl Y, %eax
        shll $3, %eax
        orl X, %eax
        movl %eax, X
        ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
   x = 1ULL << i;
}

into:

   long long tmp = 1;
   for (i = ...; ++i, tmp+=tmp)
     x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
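
A hedged C illustration of the same idea (the byte offset shown assumes a
little-endian target, where the sign byte of a 4-byte int is at P+3):

int is_negative(int *P) {
  return *P < 0;                     /* today: a full i32 load       */
}

int is_negative_shrunk(int *P) {
  return ((signed char *)P)[3] < 0;  /* only the sign byte is needed */
}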

//===---------------------------------------------------------------------===//

Reassociate should turn X*X*X*X into t=X*X; t*t, to eliminate a multiply.
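
A sketch of the intended rewrite in C (same value, two multiplies instead of
three):

int pow4(int x) {
  int t = x * x;   /* x^2 */
  return t * t;    /* x^4 */
}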

//===---------------------------------------------------------------------===//

A possibly interesting testcase for add/shift/mul reassociation:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j, int *l) { return memcmp(j,l,4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize.  It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size.  It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns.  Instead
of:

        movl 136(%esp), %eax
        cmpl $0, %eax
        je LBB16_2      # cond_next
LBB16_1:                # cond_true
        incl _foo
LBB16_2:                # cond_next

emit:
        movl _foo, %eax
        cmpl $1, %edi
        sbbl $-1, %eax
        movl %eax, _foo

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
      double sincos(double x, double *sin, double *cos);
      float sincosf(float x, float *sin, float *cos);
      long double sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers.  See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
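
A hedged C sketch of the combining direction (assumes the target's libm
provides sincos, a GNU extension, hence the _GNU_SOURCE define):

#define _GNU_SOURCE
#include <math.h>

void polar(double x, double *s, double *c) {
  *s = sin(x);        /* two libcalls today...           */
  *c = cos(x);
}

void polar_combined(double x, double *s, double *c) {
  sincos(x, s, c);    /* ...could become one sincos call */
}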

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret long cst':

        %struct.X = type { i32, i32 }
        %struct.Y = type { %struct.X }

define i64 @bar() {
        %retval = alloca %struct.Y, align 8
        %tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
        store i32 0, i32* %tmp12
        %tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
        store i32 1, i32* %tmp15
        %retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
        %retval.upgrd.2 = load i64* %retval.upgrd.1
        ret i64 %retval.upgrd.2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

        %struct..0anon = type { <4 x float> }

define void @test1(<4 x float> %V, float* %P) {
        %u = alloca %struct..0anon, align 16
        %tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
        store <4 x float> %V, <4 x float>* %tmp
        %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
        %tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
        %tmp.upgrd.2 = load float* %tmp.upgrd.1
        %tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
        store float %tmp3, float* %P
        ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the or/and masks are
0xC5000000 and 0xC5FFFFFF, so the net effect is storing the byte 0xC5 into the
most significant byte of *P; the other 3 bytes are unmodified):

void %test(uint* %P) {
        %tmp = load uint* %P
        %tmp14 = or uint %tmp, 3305111552
        %tmp15 = and uint %tmp14, 3321888767
        store uint %tmp15, uint* %P
        ret void
}

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

Compile:

int bar(int x)
{
  int t = __builtin_clz(x);
  return -(t>>5);
}

to:

_bar:   addic r3,r3,-1
        subfe r3,r3,r3
        blr

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
  cttz(x) = popcnt((x-1) & ~x)
(and ctlz similarly, after smearing the highest set bit down into all lower
bits) on targets that have popcnt but not cttz/ctlz.  Itanium, what else?
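
A hedged C sketch of both lowerings, with __builtin_popcount standing in for
the target's popcnt instruction (32-bit values assumed; both formulas happen
to return 32 for x == 0):

#include <stdint.h>

unsigned cttz32(uint32_t x) {
  /* (x-1) & ~x sets exactly the bits below the lowest set bit of x. */
  return __builtin_popcount((x - 1) & ~x);
}

unsigned ctlz32(uint32_t x) {
  /* Smear the highest set bit down, then count the zeros above it. */
  x |= x >> 1;
  x |= x >> 2;
  x |= x >> 4;
  x |= x >> 8;
  x |= x >> 16;
  return __builtin_popcount(~x);
}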

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

      for(i=0; i<reg->size; i++)
        {
          /* Flip the target bit of each basis state */
          reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
        }

Where MAX_UNSIGNED/state is a 64-bit int.  On a 32-bit platform it would be just
so cool to turn it into something like:

   long long Res = ((MAX_UNSIGNED) 1 << target);
   if (target < 32) {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= (Res & 0xFFFFFFFFULL);
   } else {
     for(i=0; i<reg->size; i++)
       reg->node[i].state ^= (Res & 0xFFFFFFFF00000000ULL);
   }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but alas...

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine:

unsigned int swap_32(unsigned int v) {
  v = ((v & 0x00ff00ffU) << 8)  | ((v & 0xff00ff00U) >> 8);
  v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
  return v;
}

Nor is this (yes, it really is bswap):

unsigned long reverse(unsigned v) {
    unsigned t;
    t = v ^ ((v << 16) | (v >> 16));
    t &= ~0xff0000;
    v = (v << 24) | (v >> 8);
    return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}
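
For instance, on a little-endian target read_16_le is just a (possibly
unaligned) 16-bit load; a memcpy-based sketch of the form the optimizer should
reach (helper name hypothetical):

#include <string.h>

unsigned short read_16_le_load(const unsigned char *adr) {
  unsigned short v;
  memcpy(&v, adr, sizeof v);   /* folds to one 16-bit load */
  return v;
}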

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned.  Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match.  See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
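
As a hedged C illustration of the shape of fold involved, a udiv compared
against a constant is just a range check:

/* (x / 10) == 5  holds exactly when  50 <= x < 60,  i.e.  (x - 50) < 10 */
unsigned f(unsigned x)        { return x / 10 == 5; }
unsigned f_folded(unsigned x) { return x - 50 < 10; }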

//===---------------------------------------------------------------------===//

Instcombine misses several of these cases (see the testcase in the patch):
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01519.html

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history" related copy loops
are turned into memcpy calls at the source level.  We need a "loops to memcpy"
pass.
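
A sketch (with hypothetical names) of the kind of loop such a pass would
target, assuming the source and destination don't overlap:

#include <string.h>

void copy_history(int *dst, const int *src, int n) {
  for (int i = 0; i < n; ++i)                  /* element-by-element copy today */
    dst[i] = src[i];
}

void copy_history_memcpy(int *dst, const int *src, int n) {
  memcpy(dst, src, (size_t)n * sizeof(int));   /* the desired rewrite */
}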

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used.  On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags.  On x86, this means we use one
more register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

Promotion of i32 bswap can use i64 bswap + shr.  Useful on targets with 64-bit
regs and bswap, like Itanium.
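
A hedged C sketch of the identity (using GCC/Clang's __builtin_bswap64 to
stand in for the target's 64-bit bswap):

#include <stdint.h>

uint32_t bswap32_via_64(uint32_t x) {
  /* Swapping x in a 64-bit register puts its reversed bytes in the high
     half; shift them back down to get the i32 bswap. */
  return (uint32_t)(__builtin_bswap64((uint64_t)x) >> 32);
}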

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has.  This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IVs (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
        mov r3, #0
        mov r2, r3
        mov r1, r3
LBB1_2: @bb
        ldr r12, LCPI1_0
        ldr r12, [r12]
        strh r2, [r12]
        ldr r12, LCPI1_1
        ldr r12, [r12]
        strh r3, [r12]
        add r1, r1, #1  <- [0,+,1]
        add r3, r3, #4
        add r2, r2, #1  <- [0,+,1]
        cmp r1, r0
        bne LBB1_2      @bb

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
;RUN: llvm-upgrade < %s | llvm-as | opt -tailcallelim | llvm-dis | not grep call

int %t4(int %a) {
entry:
        %tmp.1 = and int %a, 1
        %tmp.2 = cast int %tmp.1 to bool
        br bool %tmp.2, label %then.0, label %else.0

then.0:
        %tmp.5 = add int %a, -1
        %tmp.3 = call int %t4( int %tmp.5 )
        br label %return

else.0:
        %tmp.7 = setne int %a, 0
        br bool %tmp.7, label %then.1, label %return

then.1:
        %tmp.11 = add int %a, -2
        %tmp.9 = call int %t4( int %tmp.11 )
        br label %return

return:
        %result.0 = phi int [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                            [ %tmp.9, %then.1 ]
        ret int %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion checks.

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-upgrade < %s | llvm-as | opt -argpromotion | llvm-dis | grep x.val

implementation   ; Functions:

internal int %foo(int* %x) {
entry:
        %tmp = load int* %x
        %tmp.foo = call int %foo(int *%x)
        ret int %tmp.foo
}

int %bar(int* %x) {
entry:
        %tmp3 = call int %foo( int* %x)         ; <int>[#uses=1]
        ret int %tmp3
}

//===---------------------------------------------------------------------===//

"basicaa" should know how to look through "or" instructions that act like add
instructions.  For example in this code, the x*4+1 is turned into x*4 | 1, and
basicaa can't analyze the array subscript, leading to duplicated loads in the
generated code:

void test(int X, int Y, int a[]) {
  int i;
  for (i=2; i<1000; i+=4) {
    a[i+0] = a[i-1+0]*a[i-2+0];
    a[i+1] = a[i-1+1]*a[i-2+1];
    a[i+2] = a[i-1+2]*a[i-2+2];
    a[i+3] = a[i-1+3]*a[i-2+3];
  }
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass.  Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

We compile this to:
_foo:
        subl $28, %esp
        call "L1$pb"
"L1$pb":
        popl %eax
        cmpl $0, 32(%esp)
        je LBB1_2       # cond_true
LBB1_1: # return
        # ...
        addl $28, %esp
        ret
LBB1_2: # cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block.  It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it.  This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86.  If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//