//===---------------------------------------------------------------------===//
// Random ideas for the ARM backend.
//===---------------------------------------------------------------------===//

Reimplement 'select' in terms of 'SEL'.

* We would really like to support UXTAB16, but we need to prove that the
  add doesn't need to overflow between the two 16-bit chunks (a sketch of the
  pattern follows this list).

* Implement pre/post increment support.  (e.g. PR935)
* Coalesce stack slots!
* Implement smarter constant generation for binops with large immediates.

* Consider materializing FP constants like 0.0f and 1.0f using integer
  immediate instructions then copy to FPU.  Slower than load into FPU?
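
A minimal C sketch of the UXTAB16 point above (illustrative only; the function
name and mask are made up, this is not backend code). UXTAB16 zero-extends two
bytes of one operand and adds them independently to the two halfwords of the
other, so a plain 32-bit add can only be selected as UXTAB16 if the low
halfword sum provably never carries into bit 16:

unsigned add_bytes_to_halfwords(unsigned acc, unsigned b) {
  unsigned ext = b & 0x00ff00ffu;   /* bytes 0 and 2 zero-extended into halfword lanes */
  return acc + ext;                 /* equivalent to UXTAB16 only if no carry crosses bit 15 */
}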

//===---------------------------------------------------------------------===//

Crazy idea: Consider code that uses lots of 8-bit or 16-bit values.  By the
time regalloc happens, these values are now in a 32-bit register, usually with
the top-bits known to be sign or zero extended.  If spilled, we should be able
to spill these to an 8-bit or 16-bit stack slot, zero or sign extending as part
of the reload.

Doing this reduces the size of the stack frame (important for thumb etc), and
also increases the likelihood that we will be able to reload multiple values
from the stack with a single load.
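
For instance (a made-up example, not from any benchmark), if register pressure
forces the i8 values below to be spilled across the call, each spill currently
takes a full 32-bit slot and str/ldr, where an strb/ldrsb to a byte-sized slot
would do:

extern void use(int);
int sum_bytes(signed char a, signed char b, signed char c, signed char d) {
  use(a);                 /* b, c and d are live across the call */
  return a + b + c + d;
}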

//===---------------------------------------------------------------------===//

The constant island pass is in good shape.  Some cleanups might be desirable,
but there is unlikely to be much improvement in the generated code.

1.  There may be some advantage to trying to be smarter about the initial
placement, rather than putting everything at the end.

2.  There might be some compile-time efficiency to be had by representing
consecutive islands as a single block rather than multiple blocks.

3.  Use a priority queue to sort constant pool users in inverse order of
    position so we always process the one closest to the end of the function
    first.  This may simplify CreateNewWater.

//===---------------------------------------------------------------------===//

Eliminate copysign custom expansion. We are still generating crappy code with
default expansion + if-conversion.
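
For reference, a minimal sketch (assuming IEEE-754 doubles; the function name
is made up) of the straight-line expansion we would like instead of a
compare-and-branch or conditional-move sequence: splice the sign bit of the
second operand into the first with integer ops.

#include <stdint.h>
#include <string.h>

double copysign_sketch(double x, double y) {
  uint64_t xb, yb;
  memcpy(&xb, &x, sizeof xb);
  memcpy(&yb, &y, sizeof yb);
  xb = (xb & ~(1ULL << 63)) | (yb & (1ULL << 63));   /* copy y's sign bit into x */
  memcpy(&x, &xb, sizeof x);
  return x;
}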

//===---------------------------------------------------------------------===//

Eliminate one instruction from:

define i32 @_Z6slow4bii(i32 %x, i32 %y) {
        %tmp = icmp sgt i32 %x, %y
        %retval = select i1 %tmp, i32 %x, i32 %y
        ret i32 %retval
}

__Z6slow4bii:
        cmp     r0, r1
        movgt   r1, r0
        mov     r0, r1
        bx      lr
=>

__Z6slow4bii:
        cmp     r0, r1
        movle   r0, r1
        bx      lr

//===---------------------------------------------------------------------===//

Implement long long "X-3" with instructions that fold the immediate in.  These
were disabled due to badness with the ARM carry flag on subtracts.

//===---------------------------------------------------------------------===//

We currently compile abs:
int foo(int p) { return p < 0 ? -p : p; }

into:

_foo:
        rsb     r1, r0, #0
        cmn     r0, #1
        movgt   r1, r0
        mov     r0, r1
        bx      lr

This is very, uh, literal.  This could be a 3 operation sequence:
  t = (p sra 31);
  res = (p xor t)-t

Which would be better.  This occurs in png decode.
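
A minimal C version of that 3-operation sequence (the function name is made
up; it assumes an arithmetic right shift on signed int, which holds on ARM
targets):

int abs_branchless(int p) {
  int t = p >> 31;        /* t = (p sra 31): 0 for non-negative p, -1 for negative */
  return (p ^ t) - t;     /* res = (p xor t) - t */
}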

//===---------------------------------------------------------------------===//

More load / store optimizations:
1) Look past instructions without side-effects (not load, store, branch, etc.)
   when forming the list of loads / stores to optimize.

2) Smarter register allocation?
We are probably missing some opportunities to use ldm / stm. Consider:

ldr r5, [r0]
ldr r4, [r0, #4]

This cannot be merged into an ldm. Perhaps we will need to do the transformation
before register allocation. Then teach the register allocator to allocate a
chunk of consecutive registers.

3) Better representation for block transfer? This is from Olden/power:

        fldd d0, [r4]
        fstd d0, [r4, #+32]
        fldd d0, [r4, #+8]
        fstd d0, [r4, #+40]
        fldd d0, [r4, #+16]
        fstd d0, [r4, #+48]
        fldd d0, [r4, #+24]
        fstd d0, [r4, #+56]

If we can spare the registers, it would be better to use fldm and fstm here.
Need major register allocator enhancement though.

4) Can we recognize the relative position of constantpool entries? i.e. Treat

        ldr r0, LCPI17_3
        ldr r1, LCPI17_4
        ldr r2, LCPI17_5

   as
        ldr r0, LCPI17
        ldr r1, LCPI17+4
        ldr r2, LCPI17+8

   Then the ldr's can be combined into a single ldm. See Olden/power.

Note for ARM v4 gcc uses ldmia to load a pair of 32-bit values to represent a
double 64-bit FP constant:

        adr     r0, L6
        ldmia   r0, {r0-r1}

        .align 2
L6:
        .long   -858993459
        .long   1074318540

5) Can we make use of ldrd and strd? Instead of generating ldm / stm, use
ldrd/strd instead if there are only two destination registers that form an
odd/even pair. However, we probably would pay a penalty if the address is not
aligned on an 8-byte boundary. This requires more information on load / store
nodes (and MIs?) than we currently carry.

6) struct copies appear to be done field by field
instead of by words, at least sometimes:

struct foo { int x; short s; char c1; char c2; };
void cpy(struct foo*a, struct foo*b) { *a = *b; }

llvm code (-O2)
        ldrb r3, [r1, #+6]
        ldr r2, [r1]
        ldrb r12, [r1, #+7]
        ldrh r1, [r1, #+4]
        str r2, [r0]
        strh r1, [r0, #+4]
        strb r3, [r0, #+6]
        strb r12, [r0, #+7]
gcc code (-O2)
        ldmia r1, {r1-r2}
        stmia r0, {r1-r2}

In this benchmark, poor handling of aggregate copies has shown up as
having a large effect on size, and possibly speed as well (we don't have
a good way to measure on ARM).

//===---------------------------------------------------------------------===//

* Consider this silly example:

double bar(double x) {
  double r = foo(3.1);
  return x+r;
}

_bar:
        stmfd   sp!, {r4, r5, r7, lr}
        add     r7, sp, #8
        mov     r4, r0
        mov     r5, r1
        fldd    d0, LCPI1_0
        fmrrd   r0, r1, d0
        bl      _foo
        fmdrr   d0, r4, r5
        fmsr    s2, r0
        fsitod  d1, s2
        faddd   d0, d1, d0
        fmrrd   r0, r1, d0
        ldmfd   sp!, {r4, r5, r7, pc}

Ignore the prologue and epilogue stuff for a second. Note
        mov     r4, r0
        mov     r5, r1
the copies to callee-save registers and the fact they are only being used by
the fmdrr instruction. It would have been better had the fmdrr been scheduled
before the call and placed the result in a callee-save DPR register. The two
mov ops would not have been necessary.

//===---------------------------------------------------------------------===//

Calling convention related stuff:

* gcc's parameter passing implementation is terrible and we suffer as a result:

e.g.
struct s {
    double d1;
    int s1;
};

void foo(struct s S) {
    printf("%g, %d\n", S.d1, S.s1);
}

'S' is passed via registers r0, r1, r2. But gcc stores them to the stack, and
then reloads them to r1, r2, and r3 before issuing the call (r0 contains the
address of the format string):

        stmfd   sp!, {r7, lr}
        add     r7, sp, #0
        sub     sp, sp, #12
        stmia   sp, {r0, r1, r2}
        ldmia   sp, {r1-r2}
        ldr     r0, L5
        ldr     r3, [sp, #8]
L2:
        add     r0, pc, r0
        bl      L_printf$stub

Instead of an stmia, an ldmia, and an ldr, wouldn't it be better to do three moves?

* Returning an aggregate type is even worse:

e.g.
struct s foo(void) {
    struct s S = {1.1, 2};
    return S;
}

        mov     ip, r0
        ldr     r0, L5
        sub     sp, sp, #12
L2:
        add     r0, pc, r0
        @ lr needed for prologue
        ldmia   r0, {r0, r1, r2}
        stmia   sp, {r0, r1, r2}
        stmia   ip, {r0, r1, r2}
        mov     r0, ip
        add     sp, sp, #12
        bx      lr

r0 (and later ip) is the hidden parameter from the caller to store the value
in. The first ldmia loads the constants into r0, r1, r2. The last stmia stores
r0, r1, r2 into the address passed in. However, there is one additional stmia
that stores r0, r1, and r2 to some stack location. The store is dead.

The llvm-gcc generated code looks like this:

csretcc void %foo(%struct.s* %agg.result) {
entry:
        %S = alloca %struct.s, align 4          ; <%struct.s*> [#uses=1]
        %memtmp = alloca %struct.s              ; <%struct.s*> [#uses=1]
        cast %struct.s* %S to sbyte*            ; <sbyte*>:0 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %0, sbyte* cast ({ double, int }* %C.0.904 to sbyte*), uint 12, uint 4 )
        cast %struct.s* %agg.result to sbyte*           ; <sbyte*>:1 [#uses=2]
        call void %llvm.memcpy.i32( sbyte* %1, sbyte* %0, uint 12, uint 0 )
        cast %struct.s* %memtmp to sbyte*               ; <sbyte*>:2 [#uses=1]
        call void %llvm.memcpy.i32( sbyte* %2, sbyte* %1, uint 12, uint 0 )
        ret void
}

llc ends up issuing two memcpy's (the first memcpy becomes 3 loads from
constantpool). Perhaps we should 1) fix llvm-gcc so the memcpy is translated
into a number of loads and stores, or 2) custom lower memcpy (of small size) to
be ldmia / stmia. I think option 2 is better but the current register
allocator cannot allocate a chunk of registers at a time.

A feasible temporary solution is to use specific physical registers at
lowering time for small (<= 4 words?) transfer size.
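
As an illustration (a made-up example, not from a benchmark), a 12-byte
aggregate copy like the one below is small enough that option 2 could lower it
to a single ldmia/stmia pair, given three consecutive scratch registers:

struct s12 { int a, b, c; };
void copy12(struct s12 *dst, const struct s12 *src) {
  *dst = *src;   /* candidate for one ldmia plus one stmia */
}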

* ARM CSRet calling convention requires the hidden argument to be returned by
the callee.

//===---------------------------------------------------------------------===//

We can definitely do a better job on BB placements to eliminate some branches.
It's very common to see llvm generated assembly code that looks like this:

LBB3:
 ...
LBB4:
...
 beq LBB3
 b LBB2

If BB4 is the only predecessor of BB3, then we can emit BB3 after BB4. We can
then eliminate the beq and turn the unconditional branch to LBB2 into a bne.

See McCat/18-imp/ComputeBoundingBoxes for an example.

//===---------------------------------------------------------------------===//

Register scavenging is now implemented.  The example in the previous version
of this document produces optimal code at -O2.

//===---------------------------------------------------------------------===//

Pre-/post- indexed load / stores:

1) We should not make the pre/post- indexed load/store transform if the base ptr
is guaranteed to be live beyond the load/store. This can happen if the base
ptr is live out of the block in which we are performing the optimization, e.g.:

mov r1, r2
ldr r3, [r1], #4
...

vs.

ldr r3, [r2]
add r1, r2, #4
...

In most cases, this is just a wasted optimization. However, sometimes it can
negatively impact performance because two-address code is more restrictive
when it comes to scheduling.

Unfortunately, liveout information is currently unavailable during DAG combine
time.

2) Consider splitting an indexed load / store into a pair of add/sub + load/store
   to solve #1 (in TwoAddressInstructionPass.cpp).

3) Enhance LSR to generate more opportunities for indexed ops.

4) Once we have added support for multiple result patterns, write indexed load
   patterns instead of C++ instruction selection code.

5) Use FLDM / FSTM to emulate indexed FP load / store.

//===---------------------------------------------------------------------===//

We should add i64 support to take advantage of the 64-bit load / stores.
We can add a pseudo i64 register class containing pseudo registers that are
register pairs. All other ops (e.g. add, sub) would be expanded as usual.
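
For example (illustrative only, not existing backend code), a copy like this
could then lower to one paired load and one paired store instead of four
32-bit memory operations:

void copy_i64(long long *dst, const long long *src) {
  *dst = *src;   /* one ldm/ldrd plus one stm/strd with a register-pair class */
}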

We need to add pseudo instructions (i.e. gethi / getlo) to extract i32 registers
from the i64 register. These are single moves which can be eliminated if the
destination register is a sub-register of the source. We should implement proper
subreg support in the register allocator to coalesce these away.

There are other minor issues such as multiple instructions for a spill / restore
/ move.

//===---------------------------------------------------------------------===//

Implement support for some more tricky ways to materialize immediates.  For
example, to get 0xffff8000, we can use:

mov r9, #&3f8000
sub r9, r9, #&400000
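
A quick check of the arithmetic (a throwaway C snippet, not backend code):
both constants fit ARM's rotated 8-bit immediate encoding, and the 32-bit
subtraction wraps to the desired value.

#include <stdint.h>
#include <assert.h>

int main(void) {
  uint32_t r9 = 0x003f8000u;   /* mov r9, #&3f8000 */
  r9 -= 0x00400000u;           /* sub r9, r9, #&400000 */
  assert(r9 == 0xffff8000u);   /* wraps to 0xffff8000 */
  return 0;
}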

//===---------------------------------------------------------------------===//

We sometimes generate multiple add / sub instructions to update sp in prologue
and epilogue if the inc / dec value is too large to fit in a single immediate
operand. In some cases, it might be better to load the value from a
constantpool instead.

//===---------------------------------------------------------------------===//

GCC generates significantly better code for this function.

int foo(int StackPtr, unsigned char *Line, unsigned char *Stack, int LineLen) {
    int i = 0;

    if (StackPtr != 0) {
        while (StackPtr != 0 && i < (((LineLen) < (32768))? (LineLen) : (32768)))
            Line[i++] = Stack[--StackPtr];
        if (LineLen > 32768)
        {
            while (StackPtr != 0 && i < LineLen)
            {
                i++;
                --StackPtr;
            }
        }
    }
    return StackPtr;
}

//===---------------------------------------------------------------------===//

This should compile to the mlas instruction:
int mlas(int x, int y, int z) { return ((x * y + z) < 0) ? 7 : 13; }

//===---------------------------------------------------------------------===//

At some point, we should triage these to see if they still apply to us:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19598
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18560
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27016

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11826
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11825
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11824
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11823
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11820
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10982

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10242
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9831
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9760
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9759
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9703
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9702
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9663

http://www.inf.u-szeged.hu/gcc-arm/
http://citeseer.ist.psu.edu/debus04linktime.html

//===---------------------------------------------------------------------===//

gcc generates smaller code for this function at -O2 or -Os:

void foo(signed char* p) {
  if (*p == 3)
    bar();
  else if (*p == 4)
    baz();
  else if (*p == 5)
    quux();
}

llvm decides it's a good idea to turn the repeated if...else into a
binary tree, as if it were a switch; the resulting code requires one fewer
compare-and-branch when *p<=2 or *p==5, the same number if *p==4
or *p>6, and one more if *p==3.  So it should be a speed win
(on balance).  However, the revised code is larger, with 4 conditional
branches instead of 3.

More seriously, there is a byte->word extend before
each comparison, where there should be only one, and the condition codes
are not remembered when the same two values are compared twice.

//===---------------------------------------------------------------------===//

More register scavenging work:

1. Use the register scavenger to track frame indices materialized into registers
   (those that do not fit in addressing modes) to allow reuse in the same BB.
2. Finish scavenging for Thumb.
3. We know some spills and restores are unnecessary. The issue is that once
   live intervals are merged, they are never split. So every def is spilled
   and every use requires a restore if the register allocator decides the
   resulting live interval is not assigned a physical register. It may be
   possible (with the help of the scavenger) to turn some spill / restore
   pairs into register copies.

//===---------------------------------------------------------------------===//

More LSR enhancements possible:

1. Teach LSR about pre- and post- indexed ops to allow the iv increment to be
   merged into a load / store.
2. Allow iv reuse even when a type conversion is required. For example, i8
   and i32 load / store addressing modes are identical.

//===---------------------------------------------------------------------===//

This:

int foo(int a, int b, int c, int d) {
  long long acc = (long long)a * (long long)b;
  acc += (long long)c * (long long)d;
  return (int)(acc >> 32);
}

Should compile to use SMLAL (Signed Multiply Accumulate Long) which multiplies
two signed 32-bit values to produce a 64-bit value, and accumulates this with
a 64-bit value.

We currently get this with both v4 and v6:

_foo:
        smull   r1, r0, r1, r0
        smull   r3, r2, r3, r2
        adds    r3, r3, r1
        adc     r0, r2, r0
        bx      lr

//===---------------------------------------------------------------------===//

This:
        #include <algorithm>
        std::pair<unsigned, bool> full_add(unsigned a, unsigned b)
        { return std::make_pair(a + b, a + b < a); }
        bool no_overflow(unsigned a, unsigned b)
        { return !full_add(a, b).second; }

Should compile to:

_Z8full_addjj:
        adds    r2, r1, r2
        movcc   r1, #0
        movcs   r1, #1
        str     r2, [r0, #0]
        strb    r1, [r0, #4]
        mov     pc, lr

_Z11no_overflowjj:
        cmn     r0, r1
        movcs   r0, #0
        movcc   r0, #1
        mov     pc, lr

not:

__Z8full_addjj:
        add     r3, r2, r1
        str     r3, [r0]
        mov     r2, #1
        mov     r12, #0
        cmp     r3, r1
        movlo   r12, r2
        str     r12, [r0, #+4]
        bx      lr
__Z11no_overflowjj:
        add     r3, r1, r0
        mov     r2, #1
        mov     r1, #0
        cmp     r3, r0
        movhs   r1, r2
        mov     r0, r1
        bx      lr

//===---------------------------------------------------------------------===//