//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add doesn't need to overflow between the two 16-bit chunks.

* Implement pre/post increment support.  (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions then copy to FPU.  Slower than load into FPU?
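
Both constants happen to be encodable as ARM rotated immediates (0.0f is all
zeroes and 1.0f is 0xFE << 22), so a minimal sketch of the idea, using the
pre-UAL VFP mnemonics and register naming from the examples later in this
file, might be:

	mov  r0, #0x3f800000   @ bit pattern of 1.0f, a valid rotated immediate
	fmsr f0, r0            @ copy it into a VFP single register

	vs.

	flds f0, .LCPI0_0      @ hypothetical constant pool entry holding 1.0f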

//===---------------------------------------------------------------------===//

Crazy idea: Consider code that uses lots of 8-bit or 16-bit values. By the
time regalloc happens, these values are now in a 32-bit register, usually with
the top bits known to be sign or zero extended. If spilled, we should be able
to spill these to an 8-bit or 16-bit stack slot, zero or sign extending as part
of the reload.

Doing this reduces the size of the stack frame (important for thumb etc), and
also increases the likelihood that we will be able to reload multiple values
from the stack with a single load.
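
A minimal sketch of what such a narrowed spill/reload might look like (the
stack offset is purely illustrative):

	strb  r0, [sp, #3]     @ spill only the low 8 bits
	...
	ldrsb r0, [sp, #3]     @ reload, sign extending as part of the load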
28
29//===---------------------------------------------------------------------===//
30
Dale Johannesenf1b214d2007-02-28 18:41:23 +000031The constant island pass is in good shape. Some cleanups might be desirable,
32but there is unlikely to be much improvement in the generated code.
Rafael Espindola75645492006-09-22 11:36:17 +000033
Dale Johannesenf1b214d2007-02-28 18:41:23 +0000341. There may be some advantage to trying to be smarter about the initial
Dale Johannesen88e37ae2007-02-23 05:02:36 +000035placement, rather than putting everything at the end.
Rafael Espindola75645492006-09-22 11:36:17 +000036
Dale Johannesen9118dbc2007-04-30 00:32:06 +0000372. There might be some compile-time efficiency to be had by representing
Dale Johannesenf1b214d2007-02-28 18:41:23 +000038consecutive islands as a single block rather than multiple blocks.
39
Dale Johannesen9118dbc2007-04-30 00:32:06 +0000403. Use a priority queue to sort constant pool users in inverse order of
Evan Cheng1a9da0d2007-03-09 19:46:06 +000041 position so we always process the one closed to the end of functions
42 first. This may simply CreateNewWater.

//===---------------------------------------------------------------------===//

Eliminate copysign custom expansion. We are still generating crappy code with
default expansion + if-conversion.

//===---------------------------------------------------------------------===//

Eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
	%tmp = icmp sgt i32 %x, %y
	%retval = select i1 %tmp, i32 %x, i32 %y
	ret i32 %retval
}

__Z6slow4bii:
	cmp r0, r1
	movgt r1, r0
	mov r0, r1
	bx lr
=>

__Z6slow4bii:
	cmp r0, r1
	movle r0, r1
	bx lr

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in. These
were disabled due to badness with the ARM carry flag on subtracts.
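
For reference, the folded-immediate sequence we would like for an i64 X-3 with
X in r1:r0 (the ARM carry flag is an inverted borrow on subtracts, which is
what caused the earlier badness):

	subs r0, r0, #3        @ subtract from the low word, setting carry (= not-borrow)
	sbc  r1, r1, #0        @ propagate the borrow into the high word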

//===---------------------------------------------------------------------===//

We currently compile abs:
int foo(int p) { return p < 0 ? -p : p; }

into:

_foo:
	rsb r1, r0, #0
	cmn r0, #1
	movgt r1, r0
	mov r0, r1
	bx lr

This is very, uh, literal. This could be a 3-operation sequence:
  t = (p sra 31);
  res = (p xor t)-t

Which would be better. This occurs in png decode.

//===---------------------------------------------------------------------===//

More load / store optimizations:
1) Look past instructions without side-effects (not load, store, branch, etc.)
   when forming the list of loads / stores to optimize.

2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm. Consider:

ldr r5, [r0]
ldr r4, [r0, #4]

This cannot be merged into an ldm. Perhaps we will need to do the transformation
before register allocation. Then teach the register allocator to allocate a
chunk of consecutive registers.

3) Better representation for block transfer? This is from Olden/power:

	fldd d0, [r4]
	fstd d0, [r4, #+32]
	fldd d0, [r4, #+8]
	fstd d0, [r4, #+40]
	fldd d0, [r4, #+16]
	fstd d0, [r4, #+48]
	fldd d0, [r4, #+24]
	fstd d0, [r4, #+56]

If we can spare the registers, it would be better to use fldm and fstm here.
Need major register allocator enhancement though.

4) Can we recognize the relative position of constantpool entries? i.e. Treat

	ldr r0, LCPI17_3
	ldr r1, LCPI17_4
	ldr r2, LCPI17_5

   as
	ldr r0, LCPI17
	ldr r1, LCPI17+4
	ldr r2, LCPI17+8

   Then the ldr's can be combined into a single ldm. See Olden/power.

Note that for ARM v4 gcc uses ldmia to load a pair of 32-bit values to
represent a double 64-bit FP constant:

	adr r0, L6
	ldmia r0, {r0-r1}

	.align 2
L6:
	.long -858993459
	.long 1074318540

5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
ldrd/strd instead if there are only two destination registers that form an
odd/even pair. However, we would probably pay a penalty if the address is not
aligned on an 8-byte boundary. This requires more information on load / store
nodes (and MIs?) than we currently carry. A sketch of the idea appears after
6) below.

6) struct copies appear to be done field by field
instead of by words, at least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

llvm code (-O2)
	ldrb r3, [r1, #+6]
	ldr r2, [r1]
	ldrb r12, [r1, #+7]
	ldrh r1, [r1, #+4]
	str r2, [r0]
	strh r1, [r0, #+4]
	strb r3, [r0, #+6]
	strb r12, [r0, #+7]
gcc code (-O2)
	ldmia r1, {r1-r2}
	stmia r0, {r1-r2}

In this benchmark poor handling of aggregate copies has shown up as
having a large effect on size, and possibly speed as well (we don't have
a good way to measure on ARM).
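
For instance, the 8-byte struct copy in 6) could in principle use the
ldrd / strd from 5), assuming an ARMv5TE+ target, an even/odd register pair,
and an 8-byte aligned address (struct foo is only 4-byte aligned, so this is
exactly the alignment-penalty question raised above; register choice here is
illustrative):

	ldrd r2, r3, [r1]
	strd r2, r3, [r0]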

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(3.1);
  return x+r;
}

_bar:
	sub sp, sp, #16
	str r4, [sp, #+12]
	str r5, [sp, #+8]
	str lr, [sp, #+4]
	mov r4, r0
	mov r5, r1
	ldr r0, LCPI2_0
	bl _foo
	fmsr f0, r0
	fcvtsd d0, f0
	fmdrr d1, r4, r5
	faddd d0, d0, d1
	fmrrd r0, r1, d0
	ldr lr, [sp, #+4]
	ldr r5, [sp, #+8]
	ldr r4, [sp, #+12]
	add sp, sp, #16
	bx lr

Ignore the prologue and epilogue stuff for a second. Note
	mov r4, r0
	mov r5, r1
the copies to callee-saved registers and the fact that they are only used by
the fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call, placing the result in a callee-saved DPR register. The two
mov ops would not have been necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

e.g.
struct s {
  double d1;
  int s1;
};

void foo(struct s S) {
  printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them into r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

	stmfd sp!, {r7, lr}
	add r7, sp, #0
	sub sp, sp, #12
	stmia sp, {r0, r1, r2}
	ldmia sp, {r1-r2}
	ldr r0, L5
	ldr r3, [sp, #8]
L2:
	add r0, pc, r0
	bl L_printf$stub

Instead of the stmia, ldmia, and ldr, wouldn't it be better to do three moves?
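
Roughly, the moves could shuffle the struct straight into the printf argument
registers (keeping gcc's pc-relative load of the format string; the exact
sequence is only a sketch):

	mov r3, r2             @ S.s1
	mov r2, r1             @ high half of S.d1
	mov r1, r0             @ low half of S.d1
	ldr r0, L5             @ format string, as before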

* Returning an aggregate type is even worse:

e.g.
struct s foo(void) {
  struct s S = {1.1, 2};
  return S;
}

	mov ip, r0
	ldr r0, L5
	sub sp, sp, #12
L2:
	add r0, pc, r0
	@ lr needed for prologue
	ldmia r0, {r0, r1, r2}
	stmia sp, {r0, r1, r2}
	stmia ip, {r0, r1, r2}
	mov r0, ip
	add sp, sp, #12
	bx lr

r0 (and later ip) is the hidden parameter from the caller in which to store the
return value. The first ldmia loads the constants into r0, r1, r2. The last
stmia stores r0, r1, r2 into the address passed in. However, there is one
additional stmia that stores r0, r1, and r2 to some stack location. That store
is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
	%S = alloca %struct.s, align 4		; <%struct.s*> [#uses=1]
	%memtmp = alloca %struct.s		; <%struct.s*> [#uses=1]
	cast %struct.s* %S to sbyte*		; <sbyte*>:0 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
	cast %struct.s* %agg.result to sbyte*		; <sbyte*>:1 [#uses=2]
	call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
	cast %struct.s* %memtmp to sbyte*		; <sbyte*>:2 [#uses=1]
	call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
	ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from the
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size) to
be ldmia / stmia. I think option 2 is better, but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at lowering
time for small (<= 4 words?) transfer sizes.
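
A rough sketch of option 2 for the 12-byte copy above, assuming the lowering
can hard-code a scratch chunk (r3-r5 and the address registers are chosen
arbitrarily here):

	ldmia r1, {r3, r4, r5}   @ load 12 bytes from the source
	stmia r0, {r3, r4, r5}   @ store them to the destination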

* The ARM CSRet calling convention requires the hidden argument to be returned
by the callee.

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placement to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB3:
 ...
LBB4:
...
  beq LBB3
  b LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

Register scavenging is now implemented. The example in the previous version
of this document produces optimal code at -O2.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post- indexed load/store transform if the base
ptr is guaranteed to be live beyond the load/store. This can happen if the
base ptr is live out of the block in which we are performing the optimization,
e.g.:

mov r1, r2
ldr r3, [r1], #4
...

vs.

ldr r3, [r2]
add r1, r2, #4
...

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub +
   load/store to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we add support for multiple result patterns, write indexed load
   patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.
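
For 5), the writeback forms already behave like post-indexed accesses; a
sketch using the pre-UAL VFP mnemonics:

	fldmias r0!, {s0}        @ s0 = [r0], then r0 += 4
	fstmias r1!, {s0}        @ [r1] = s0, then r1 += 4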

//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs. All other ops (e.g. add, sub) would be expanded as usual.

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32 registers
from the i64 register. These are single moves which can be eliminated if the
destination register is a sub-register of the source. We should implement proper
subreg support in the register allocator to coalesce these away.

There are other minor issues such as multiple instructions for a spill / restore
/ move.

//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates. For
example, to get 0xffff8000, we can use:

mov r9, #&3f8000
sub r9, r9, #&400000

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in prologue
and epilogue if the inc / dec value is too large to fit in a single immediate
operand. In some cases, perhaps it might be better to load the value from a
constantpool instead.
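
For example, for a 4100-byte frame (label and size purely illustrative):

	ldr r12, .LCPI_framesize   @ hypothetical constant pool entry holding 4100
	sub sp, sp, r12

instead of

	sub sp, sp, #4096
	sub sp, sp, #4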

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function.

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;

    if (StackPtr != 0) {
       while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
          Line[i++] = Stack[--StackPtr];
        if (LineLen > 32768)
        {
            while (StackPtr != 0 && i < LineLen)
            {
                i++;
                --StackPtr;
            }
        }
    }
    return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:
int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }
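
One plausible sequence, where the compare is folded into the flag-setting
multiply-accumulate and the result itself is only needed for its flags
(register choices are illustrative):

_mlas:
	mlas  r3, r0, r1, r2     @ r3 = x*y + z, setting N and Z
	mov   r0, #13
	movmi r0, #7             @ negative result => return 7
	bx    lr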

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//

gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {
  if (*p == 3)
    bar();
  else if (*p == 4)
    baz();
  else if (*p == 5)
    quux();
}

llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number if *p==4
or *p>6, and one more if *p==3. So it should be a speed win
(on balance). However, the revised code is larger, with 4 conditional
branches instead of 3.

More seriously, there is a byte->word extend before
each comparison, where there should be only one, and the condition codes
are not remembered when the same two values are compared twice.

//===---------------------------------------------------------------------===//

More register scavenging work:

1. Use the register scavenger to track frame indices materialized into registers
   (those that do not fit in addressing modes) to allow reuse in the same BB.
2. Finish scavenging for Thumb.
3. We know some spills and restores are unnecessary. The issue is that once
   live intervals are merged, they are never split. So every def is spilled
   and every use requires a restore if the register allocator decides the
   resulting live interval is not assigned a physical register. It may be
   possible (with the help of the scavenger) to turn some spill / restore
   pairs into register copies.

//===---------------------------------------------------------------------===//

More LSR enhancements possible:

1. Teach LSR about pre- and post- indexed ops to allow the iv increment to be
   merged into a load / store.
2. Allow iv reuse even when a type conversion is required. For example, i8
   and i32 load / store addressing modes are identical.

//===---------------------------------------------------------------------===//

This:

int foo(int a, int b, int c, int d) {
  long long acc = (long long)a * (long long)b;
  acc += (long long)c * (long long)d;
  return (int)(acc >> 32);
}

Should compile to use SMLAL (Signed Multiply Accumulate Long) which multiplies
two signed 32-bit values to produce a 64-bit value, and accumulates this with
a 64-bit value.
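
A rough sketch of the kind of sequence we are hoping for (register choices
are illustrative; on pre-v6 parts the long multiplies have operand
restrictions that would force a different assignment):

_foo:
	smull r12, r0, r0, r1    @ {r0,r12} = (long long)a * (long long)b
	smlal r12, r0, r2, r3    @ {r0,r12} += (long long)c * (long long)d
	bx    lr                 @ the high 32 bits are already in r0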

We currently get this with v6:

_foo:
	mul r12, r1, r0
	smmul r1, r1, r0
	smmul r0, r3, r2
	mul r3, r3, r2
	adds r3, r3, r12
	adc r0, r0, r1
	bx lr

and this with v4:

_foo:
	stmfd sp!, {r7, lr}
	mov r7, sp
	mul r12, r1, r0
	smull r0, r1, r1, r0
	smull lr, r0, r3, r2
	mul r3, r3, r2
	adds r3, r3, r12
	adc r0, r0, r1
	ldmfd sp!, {r7, pc}

This apparently occurs in real code.

//===---------------------------------------------------------------------===//

This:
	#include <algorithm>
	std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
	{ return std::make_pair(a + b, a + b < a); }
	bool no_overflow(unsigned a, unsigned b)
	{ return !full_add(a, b).second; }

Should compile to:

_Z8full_addjj:
	adds	r2, r1, r2
	movcc	r1, #0
	movcs	r1, #1
	str	r2, [r0, #0]
	strb	r1, [r0, #4]
	mov	pc, lr

_Z11no_overflowjj:
	cmn	r0, r1
	movcs	r0, #0
	movcc	r0, #1
	mov	pc, lr

not:

__Z8full_addjj:
	add r3, r2, r1
	str r3, [r0]
	mov r2, #1
	mov r12, #0
	cmp r3, r1
	movlo r12, r2
	str r12, [r0, #+4]
	bx lr
__Z11no_overflowjj:
	add r3, r1, r0
	mov r2, #1
	mov r1, #0
	cmp r3, r0
	movhs r1, r2
	mov r0, r1
	bx lr

//===---------------------------------------------------------------------===//

Easy ARM microoptimization (with -mattr=+vfp2):

define i64 @i(double %X) {
	%Y = bitcast double %X to i64
	ret i64 %Y
}

compiles into:

_i:
	fmdrr d0, r0, r1
	fmrrd r0, r1, d0
	bx lr

This just needs a target-specific dag combine to merge the two ARMISD nodes.
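
With those two nodes folded away, the function presumably reduces to nothing
but the return, since under the soft-float calling convention the i64 result
lives in the same r0/r1 pair the double arrived in:

_i:
	bx lr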


//===---------------------------------------------------------------------===//