//===---------------------------------------------------------------------===//

Common register allocation / spilling problem:

        mul lr, r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        ldr r4, [sp, #+52]
        mla r4, r3, lr, r4

can be:

        mul lr, r4, lr
        mov r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

and then "merge" mul and mov:

        mul r4, r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

It also increases the likelihood that the store becomes dead.

//===---------------------------------------------------------------------===//

bb27 ...
        ...
        %reg1037 = ADDri %reg1039, 1
        %reg1038 = ADDrs %reg1032, %reg1039, %NOREG, 10
    Successors according to CFG: 0x8b03bf0 (#5)

bb76 (0x8b03bf0, LLVM BB @0x8b032d0, ID#5):
    Predecessors according to CFG: 0x8b0c5f0 (#3) 0x8b0a7c0 (#4)
        %reg1039 = PHI %reg1070, mbb<bb76.outer,0x8b0c5f0>, %reg1037, mbb<bb27,0x8b0a7c0>

Note that ADDri is not a two-address instruction. However, its result %reg1037
is an operand of the PHI node in bb76, and its operand %reg1039 is the result
of that PHI node. We should treat it as if it were two-address code and make
sure the ADDri is scheduled after any node that reads %reg1039.
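
A rough sketch of the check this would need (illustrative only; it assumes
current MachineRegisterInfo / MachineInstr APIs, and the helper name is made
up): given an instruction, decide whether one of its defs feeds a PHI whose
result is also one of its own source registers, in which case the scheduler
should add the two-address-style ordering constraint described above.

  #include "llvm/CodeGen/MachineInstr.h"
  #include "llvm/CodeGen/MachineRegisterInfo.h"
  using namespace llvm;

  // Returns true if MI defines a register that is used by a PHI whose result
  // appears as a source operand of MI itself (the ADDri / %reg1039 case).
  static bool feedsPHIOfOwnSource(const MachineInstr &MI,
                                  const MachineRegisterInfo &MRI) {
    for (const MachineOperand &Def : MI.operands()) {
      if (!Def.isReg() || !Def.isDef())
        continue;
      for (const MachineInstr &UseMI : MRI.use_instructions(Def.getReg())) {
        if (!UseMI.isPHI())
          continue;
        Register PHIRes = UseMI.getOperand(0).getReg();
        for (const MachineOperand &Src : MI.operands())
          if (Src.isReg() && !Src.isDef() && Src.getReg() == PHIRes)
            return true;  // MI should follow readers of PHIRes.
      }
    }
    return false;
  }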

//===---------------------------------------------------------------------===//

Use local information (i.e. the register scavenger) to assign the reloaded
value a free register, so that later reloads of the same stack slot can reuse
it:
        ldr r3, [sp, #+4]
        add r3, r3, #3
        ldr r2, [sp, #+8]
        add r2, r2, #2
        ldr r1, [sp, #+4]     <==
        add r1, r1, #1
        ldr r0, [sp, #+4]
        add r0, r0, #2
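
A minimal sketch of the bookkeeping this implies (illustrative only; the type
and function names are invented, and finding a register to keep the value in
is where a RegScavenger-style "find an unused register" query would come in):
track which register currently holds each reloaded stack slot, and turn a
repeated reload into a copy, or drop it entirely, while the value survives.

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/CodeGen/Register.h"

  namespace {
  struct ReloadTracker {
    // FrameIndex -> register currently holding that slot's value.
    llvm::DenseMap<int, llvm::Register> Available;

    // Before emitting a reload of frame index FI, ask whether some register
    // already holds the value; an invalid Register means "reload for real".
    llvm::Register lookup(int FI) const {
      auto It = Available.find(FI);
      return It == Available.end() ? llvm::Register() : It->second;
    }

    // Record a completed reload into R; callers must invalidate the entry
    // when the slot is stored to again or R is clobbered (as r3 is above).
    void record(int FI, llvm::Register R) { Available[FI] = R; }
    void invalidate(int FI) { Available.erase(FI); }
  };
  } // end anonymous namespace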

//===---------------------------------------------------------------------===//

LLVM aggressively lifts CSE out of loops. Sometimes this can have negative
side effects:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
load [i + R1]
...
load [i + R2]
...
load [i + R3]

Suppose register pressure is high; R1, R2, and R3 may then be spilled. We need
to implement proper re-materialization to handle this:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
R1 = X + 4 @ re-materialized
load [i + R1]
...
R2 = X + 7 @ re-materialized
load [i + R2]
...
R3 = X + 15 @ re-materialized
load [i + R3]

Furthermore, with re-association, we can enable sharing:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
T = i + X
load [T + 4]
...
load [T + 7]
...
load [T + 15]
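
For reference, a sketch (names and types invented for illustration) of the
kind of source loop that lowers to the pattern above: each access is at
address i + (X + k), so LICM hoists X + 4, X + 7 and X + 15 out of the loop,
while re-association to (i + X) + k would let the loop share a single base.

  // Illustrative only; matches the pseudo-code above.
  int sum_fields(const char *X, long n) {
    int total = 0;
    for (long i = 0; i < n; ++i) {
      total += X[i + 4];   // load [i + R1], R1 = X + 4
      total += X[i + 7];   // load [i + R2], R2 = X + 7
      total += X[i + 15];  // load [i + R3], R3 = X + 15
    }
    return total;
  }
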
//===---------------------------------------------------------------------===//

It's not always a good idea to choose rematerialization over spilling. If all
of the loads / stores can be folded into the using instructions, then spilling
is cheaper because it won't require new live intervals / registers. See
2003-05-31-LongShifts for an example.

//===---------------------------------------------------------------------===//

With a copying garbage collector, derived pointers must not be retained across
collector safe points; the collector could move the objects and invalidate the
derived pointer. This is bad enough in the first place, but safe points can
crop up unpredictably. Consider:

        %array = load { i32, [0 x %obj] }** %array_addr
        %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
        %old = load %obj** %nth_el
        %z = div i64 %x, %y
        store %obj* %new, %obj** %nth_el

If the i64 division is lowered to a libcall, then a safe point will (must)
appear for the call site. If a collection occurs, %array and %nth_el no longer
point into the correct object.

The fix for this is to copy address calculations so that dependent pointers
are never live across safe point boundaries. But the loads cannot be copied
like this if there was an intervening store, so this may be hard to get right.

Only a concurrent mutator can trigger a collection at the libcall safe point.
So single-threaded programs do not have this requirement, even with a copying
collector. Still, LLVM optimizations would probably undo a front-end's careful
work.

//===---------------------------------------------------------------------===//

The ocaml frametable structure supports liveness information. It would be good
to support it.

//===---------------------------------------------------------------------===//

The FIXME in ComputeCommonTailLength in BranchFolding.cpp needs to be
revisited. The check is there to work around a misuse of directives in inline
assembly.

//===---------------------------------------------------------------------===//

It would be good to detect collector/target compatibility instead of silently
doing the wrong thing.

//===---------------------------------------------------------------------===//

It would be really nice to be able to write patterns in .td files for copies,
which would eliminate a bunch of explicit predicates on them (e.g. no side
effects). Once this is in place, it would be even better to have tblgen
synthesize the various copy insertion/inspection methods in TargetInstrInfo.

//===---------------------------------------------------------------------===//

Stack coloring improvements:

1. Do proper LiveStackAnalysis on all stack objects including those which are
   not spill slots.
2. Reorder objects to fill in gaps between objects (a rough sketch of the idea
   follows).
   e.g. 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4 => 4, 1, 1, 1, 1, 4, 4
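
A minimal sketch of the reordering in point 2 (illustrative only; the real
stack coloring / frame layout code also has to respect fixed objects and the
overall frame alignment): lay slots out by decreasing alignment and size so
small objects fall into what would otherwise be padding.

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct StackObj {
    uint64_t Size;
    uint64_t Align;    // power of two
    uint64_t Offset;   // assigned below
  };

  static void assignOffsets(std::vector<StackObj> &Objs) {
    // Biggest alignment first, then biggest size; for the example above this
    // removes the alignment gaps entirely.
    std::sort(Objs.begin(), Objs.end(),
              [](const StackObj &A, const StackObj &B) {
                return A.Align != B.Align ? A.Align > B.Align : A.Size > B.Size;
              });
    uint64_t Offset = 0;
    for (StackObj &O : Objs) {
      Offset = (Offset + O.Align - 1) & ~(O.Align - 1);  // align up
      O.Offset = Offset;
      Offset += O.Size;
    }
  }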

//===---------------------------------------------------------------------===//

The scheduler should be able to sort nearby instructions by their address. For
example, in an expanded memset sequence it's not uncommon to see code like this:

  movl $0, 4(%rdi)
  movl $0, 8(%rdi)
  movl $0, 12(%rdi)
  movl $0, 0(%rdi)

Each of the stores is independent, and the scheduler is currently making an
arbitrary decision about the order.
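
A sketch of the desired ordering (illustrative only; the struct stands in for
whatever base-plus-offset information the scheduler can pull out of the memory
operands): within a run of independent stores off the same base register, emit
them in increasing offset order rather than an arbitrary one.

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct PendingStore {
    unsigned BaseReg;  // e.g. %rdi
    int64_t Offset;    // e.g. 0, 4, 8, 12
    unsigned NodeId;   // stand-in for the node being scheduled
  };

  static void orderByAddress(std::vector<PendingStore> &Stores) {
    std::stable_sort(Stores.begin(), Stores.end(),
                     [](const PendingStore &A, const PendingStore &B) {
                       if (A.BaseReg != B.BaseReg)
                         return A.BaseReg < B.BaseReg;  // group by base
                       return A.Offset < B.Offset;      // then by address
                     });
  }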

//===---------------------------------------------------------------------===//

Another opportunity in this code is that the $0 could be moved to a register:

  movl $0, 4(%rdi)
  movl $0, 8(%rdi)
  movl $0, 12(%rdi)
  movl $0, 0(%rdi)

This would save substantial code size, especially for longer sequences like
this. It would be easy to have a rule telling isel to avoid matching MOV32mi
if the immediate has more than some fixed number of uses. It's more involved
to teach the register allocator how to do late folding to recover from
excessive register pressure.
