//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.
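
For context: on ARMv6, SEL picks each byte of the result from one of two
registers according to the GE flags, which the parallel add/subtract
instructions (e.g. USUB8) set per byte.  A hedged sketch of the kind of
source pattern that pairing serves; bytemax is a made-up example, not code
from this note:

unsigned bytemax(unsigned a, unsigned b) {
  unsigned r = 0;
  int i;
  for (i = 0; i < 32; i += 8) {
    unsigned x = (a >> i) & 0xff;   /* byte lane of a */
    unsigned y = (b >> i) & 0xff;   /* byte lane of b */
    r |= (x > y ? x : y) << i;      /* per-byte select: USUB8 + SEL */
  }
  return r;
}
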
* We would really like to support UXTAB16, but we need to prove that the
  add cannot overflow from the low 16-bit chunk into the high one (see the
  C model of the instruction after this list).

* Implement pre/post increment support.  (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions, then copying to the FPU.  Slower than a load
  into the FPU?
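
A minimal C model of what UXTAB16 computes (semantics assumed from the ARM
ARM; the helper is illustrative, not proposed code): bytes 0 and 2 of the
second operand are zero-extended and added into the two 16-bit halves of the
first, with each half wrapping independently:

unsigned uxtab16(unsigned a, unsigned b) {
  unsigned lo = ((a & 0xffff) + (b & 0xff)) & 0xffff;        /* low lane wraps  */
  unsigned hi = ((a >> 16) + ((b >> 16) & 0xff)) & 0xffff;   /* high lane wraps */
  return (hi << 16) | lo;
}

The proof obligation above is exactly that the low-lane add never produces
the carry a plain 32-bit add would have propagated into the high half.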

//===---------------------------------------------------------------------===//

Crazy idea: Consider code that uses lots of 8-bit or 16-bit values.  By the
time regalloc happens, these values are now in a 32-bit register, usually with
the top bits known to be sign- or zero-extended.  If spilled, we should be
able to spill these to an 8-bit or 16-bit stack slot, zero- or sign-extending
as part of the reload.

Doing this reduces the size of the stack frame (important for thumb etc), and
also increases the likelihood that we will be able to reload multiple values
from the stack with a single load.

//===---------------------------------------------------------------------===//

The constant island pass is in good shape.  Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1.  There may be some advantage to trying to be smarter about the initial
placement, rather than putting everything at the end.

2.  There might be some compile-time efficiency to be had by representing
consecutive islands as a single block rather than multiple blocks.

3.  Use a priority queue to sort constant pool users in inverse order of
position so we always process the one closest to the end of the function
first.  This may simplify CreateNewWater.

//===---------------------------------------------------------------------===//

Eliminate copysign custom expansion.  We are still generating crappy code with
default expansion + if-conversion.
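
For reference, a sketch of the bit-level form the expansion corresponds to
(copysign_bits is a hypothetical illustration; memcpy gives well-defined
type punning):

#include <stdint.h>
#include <string.h>

double copysign_bits(double x, double y) {
  uint64_t xb, yb;
  memcpy(&xb, &x, sizeof xb);
  memcpy(&yb, &y, sizeof yb);
  xb = (xb & ~(1ULL << 63)) | (yb & (1ULL << 63));  /* splice sign bit of y */
  memcpy(&x, &xb, sizeof x);
  return x;
}
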
//===---------------------------------------------------------------------===//

Eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}

__Z6slow4bii:
        cmp r0, r1
        movgt r1, r0
        mov r0, r1
        bx lr
=>

__Z6slow4bii:
        cmp r0, r1
        movle r0, r1
        bx lr

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in.  These
were disabled due to badness with the ARM carry flag on subtracts.

//===---------------------------------------------------------------------===//

We currently compile abs:
int foo(int p) { return p < 0 ? -p : p; }

into:

_foo:
        rsb r1, r0, #0
        cmn r0, #1
        movgt r1, r0
        mov r0, r1
        bx lr

This is very, uh, literal.  This could be a 3 operation sequence:
  t = (p sra 31);
  res = (p xor t)-t

Which would be better.  This occurs in png decode.
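
A compilable form of that sequence, for reference (it assumes >> on a signed
int is an arithmetic shift, which C leaves implementation-defined but which
matches what we generate):

int iabs(int p) {
  int t = p >> 31;      /* 0 if p >= 0, -1 if p < 0       */
  return (p ^ t) - t;   /* xor/sub are no-ops when t == 0 */
}
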
//===---------------------------------------------------------------------===//

More load / store optimizations:
1) Look past instructions without side-effects (not load, store, branch, etc.)
   when forming the list of loads / stores to optimize.

2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm.  Consider:

        ldr r5, [r0]
        ldr r4, [r0, #4]

This cannot be merged into an ldm.  Perhaps we will need to do the
transformation before register allocation.  Then teach the register allocator
to allocate a chunk of consecutive registers.

3) Better representation for block transfer?  This is from Olden/power:

        fldd d0, [r4]
        fstd d0, [r4, #+32]
        fldd d0, [r4, #+8]
        fstd d0, [r4, #+40]
        fldd d0, [r4, #+16]
        fstd d0, [r4, #+48]
        fldd d0, [r4, #+24]
        fstd d0, [r4, #+56]

If we can spare the registers, it would be better to use fldm and fstm here.
Need major register allocator enhancement though.

4) Can we recognize the relative position of constantpool entries?  i.e. treat

        ldr r0, LCPI17_3
        ldr r1, LCPI17_4
        ldr r2, LCPI17_5

as
        ldr r0, LCPI17
        ldr r1, LCPI17+4
        ldr r2, LCPI17+8

Then the ldr's can be combined into a single ldm.  See Olden/power.

Note that for ARM v4, gcc uses ldmia to load a pair of 32-bit values to
represent a double 64-bit FP constant:

        adr r0, L6
        ldmia r0, {r0-r1}

        .align 2
L6:
        .long -858993459
        .long 1074318540

5) Can we make use of ldrd and strd?  Instead of generating ldm / stm, use
ldrd / strd instead if there are only two destination registers that form an
odd/even pair.  However, we probably would pay a penalty if the address is not
aligned on an 8-byte boundary.  This requires more information on load / store
nodes (and MI's?) than we currently carry.

6) struct copies appear to be done field by field instead of by words, at
least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

llvm code (-O2)
        ldrb r3, [r1, #+6]
        ldr r2, [r1]
        ldrb r12, [r1, #+7]
        ldrh r1, [r1, #+4]
        str r2, [r0]
        strh r1, [r0, #+4]
        strb r3, [r0, #+6]
        strb r12, [r0, #+7]
gcc code (-O2)
        ldmia r1, {r1-r2}
        stmia r0, {r1-r2}

In this benchmark poor handling of aggregate copies has shown up as
having a large effect on size, and possibly speed as well (we don't have
a good way to measure on ARM).

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(3.1);
  return x+r;
}

_bar:
        stmfd sp!, {r4, r5, r7, lr}
        add r7, sp, #8
        mov r4, r0
        mov r5, r1
        fldd d0, LCPI1_0
        fmrrd r0, r1, d0
        bl _foo
        fmdrr d0, r4, r5
        fmsr s2, r0
        fsitod d1, s2
        faddd d0, d1, d0
        fmrrd r0, r1, d0
        ldmfd sp!, {r4, r5, r7, pc}

Ignore the prologue and epilogue stuff for a second.  Note
        mov r4, r0
        mov r5, r1
the copies to callee-save registers, and the fact that they are only being
used by the fmdrr instruction.  It would have been better had the fmdrr been
scheduled before the call and placed the result in a callee-save DPR register.
The two mov ops would not have been necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

e.g.
struct s {
  double d1;
  int s1;
};

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2.  But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

        stmfd sp!, {r7, lr}
        add r7, sp, #0
        sub sp, sp, #12
        stmia sp, {r0, r1, r2}
        ldmia sp, {r1-r2}
        ldr r0, L5
        ldr r3, [sp, #8]
L2:
        add r0, pc, r0
        bl L_printf$stub

Instead of an stmia, an ldmia, and an ldr, wouldn't it be better to do three
moves?

* Returning an aggregate type is even worse:

e.g.
struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

        mov ip, r0
        ldr r0, L5
        sub sp, sp, #12
L2:
        add r0, pc, r0
        @ lr needed for prologue
        ldmia r0, {r0, r1, r2}
        stmia sp, {r0, r1, r2}
        stmia ip, {r0, r1, r2}
        mov r0, ip
        add sp, sp, #12
        bx lr

r0 (and later ip) is the hidden parameter from the caller in which to store
the value.  The first ldmia loads the constants into r0, r1, r2.  The last
stmia stores r0, r1, r2 into the address passed in.  However, there is one
additional stmia that stores r0, r1, and r2 to some stack location.  The store
is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
        %S = alloca %struct.s, align 4          ; <%struct.s*> [#uses=1]
        %memtmp = alloca %struct.s              ; <%struct.s*> [#uses=1]
        cast %struct.s* %S to sbyte*            ; <sbyte*>:0 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
        cast %struct.s* %agg.result to sbyte*   ; <sbyte*>:1 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
        cast %struct.s* %memtmp to sbyte*       ; <sbyte*>:2 [#uses=1]
        call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
        ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool).  Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size)
to be ldmia / stmia.  I think option 2 is better but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer sizes.

* ARM CSRet calling convention requires the hidden argument to be returned by
the callee.

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB3:
        ...
LBB4:
        ...
        beq LBB3
        b LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4.  We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

Register scavenging is now implemented.  The example in the previous version
of this document produces optimal code at -O2.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post- indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store.  This can happen if the
base ptr is live out of the block in which we are performing the optimization.
e.g.

        mov r1, r2
        ldr r3, [r1], #4
        ...

vs.

        ldr r3, [r2]
        add r1, r2, #4
        ...

In most cases, this is just a wasted optimization.  However, sometimes it can
negatively impact the performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we add support for multiple-result patterns, write indexed load
patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.

//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs.  All other ops (e.g. add, sub) would be expanded as usual.

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32
registers from the i64 register.  These are single moves which can be
eliminated if the destination register is a sub-register of the source.  We
should implement proper subreg support in the register allocator to coalesce
these away.

There are other minor issues such as multiple instructions for a spill /
restore / move.
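
As a hedged illustration of the payoff, with a register-pair i64 class a
trivial copy like copy64 below (a made-up example) could become one ldm/stm
or ldrd/strd pair instead of two separate ldr/str pairs:

void copy64(long long *dst, const long long *src) {
  *dst = *src;   /* one 64-bit load + one 64-bit store */
}
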
//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates.  For
example, to get 0xffff8000, we can use:

        mov r9, #&3f8000
        sub r9, r9, #&400000
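
(Sanity check: both constants are valid rotated 8-bit ARM immediates, and
0x3f8000 - 0x400000 = -0x8000 = 0xffff8000.)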

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in the
prologue and epilogue if the inc / dec value is too large to fit in a single
immediate operand.  In some cases, it might be better to load the value from
a constantpool instead.

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function.

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
  int i = 0;

  if (StackPtr != 0) {
    while (StackPtr != 0 && i < (((LineLen) < (32768)) ? (LineLen) : (32768)))
      Line[i++] = Stack[--StackPtr];
    if (LineLen > 32768) {
      while (StackPtr != 0 && i < LineLen) {
        i++;
        --StackPtr;
      }
    }
  }
  return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:
int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//

gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {
  if (*p == 3)
    bar();
  else if (*p == 4)
    baz();
  else if (*p == 5)
    quux();
}

llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number when *p==4
or *p>6, and one more when *p==3.  So it should be a speed win
(on balance).  However, the revised code is larger, with 4 conditional
branches instead of 3.

More seriously, there is a byte->word extend before
each comparison, where there should be only one, and the condition codes
are not remembered when the same two values are compared twice.

//===---------------------------------------------------------------------===//

More register scavenging work:

1. Use the register scavenger to track frame index values materialized into
   registers (those that do not fit in addressing modes) to allow reuse in the
   same BB.
2. Finish scavenging for Thumb.
3. We know some spills and restores are unnecessary.  The issue is that once
   live intervals are merged, they are never split.  So every def is spilled
   and every use requires a restore if the register allocator decides the
   resulting live interval is not assigned a physical register.  It may be
   possible (with the help of the scavenger) to turn some spill / restore
   pairs into register copies.

//===---------------------------------------------------------------------===//

More LSR enhancements possible:

1. Teach LSR about pre- and post- indexed ops to allow the iv increment to be
   merged into a load / store.
2. Allow iv reuse even when a type conversion is required.  For example, i8
   and i32 load / store addressing modes are identical, as in the sketch
   below.
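
A hedged sketch of that reuse opportunity (clear_both is a made-up example):
both stores could share the single induction variable i, since the reg+reg
(optionally shifted) addressing mode applies equally to i8 and i32 accesses:

void clear_both(signed char *c, int *w, int n) {
  int i;
  for (i = 0; i < n; ++i) {
    c[i] = 0;   /* strb could use [c, i]        */
    w[i] = 0;   /* str could use [w, i, lsl #2] */
  }
}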

//===---------------------------------------------------------------------===//

This:

int foo(int a, int b, int c, int d) {
  long long acc = (long long)a * (long long)b;
  acc += (long long)c * (long long)d;
  return (int)(acc >> 32);
}

Should compile to use SMLAL (Signed Multiply Accumulate Long) which multiplies
two signed 32-bit values to produce a 64-bit value, and accumulates this with
a 64-bit value.

We currently get this with both v4 and v6:

_foo:
        smull r1, r0, r1, r0
        smull r3, r2, r3, r2
        adds r3, r3, r1
        adc r0, r2, r0
        bx lr

//===---------------------------------------------------------------------===//

This:
#include <algorithm>
std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
{ return std::make_pair(a + b, a + b < a); }
bool no_overflow(unsigned a, unsigned b)
{ return !full_add(a, b).second; }

Should compile to:

_Z8full_addjj:
        adds r2, r1, r2
        movcc r1, #0
        movcs r1, #1
        str r2, [r0, #0]
        strb r1, [r0, #4]
        mov pc, lr

_Z11no_overflowjj:
        cmn r0, r1
        movcs r0, #0
        movcc r0, #1
        mov pc, lr

not:

__Z8full_addjj:
        add r3, r2, r1
        str r3, [r0]
        mov r2, #1
        mov r12, #0
        cmp r3, r1
        movlo r12, r2
        str r12, [r0, #+4]
        bx lr
__Z11no_overflowjj:
        add r3, r1, r0
        mov r2, #1
        mov r1, #0
        cmp r3, r0
        movhs r1, r2
        mov r0, r1
        bx lr

//===---------------------------------------------------------------------===//