//===---------------------------------------------------------------------===//

Common register allocation / spilling problem:

        mul lr, r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        ldr r4, [sp, #+52]
        mla r4, r3, lr, r4

can be:

        mul lr, r4, lr
        mov r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

and then "merge" mul and mov:

        mul r4, r4, lr
        str lr, [sp, #+52]
        ldr lr, [r1, #+32]
        sxth r3, r3
        mla r4, r3, lr, r4

It also increases the likelihood that the store will become dead.

//===---------------------------------------------------------------------===//

I think we should have a "hasSideEffects" flag (which is automatically set for
anything that is "isLoad", "isCall", etc.), and the remat pass should
eventually be able to remat any instruction that has no side effects, if it
can handle it and if it is profitable.

For now, I'd suggest having the remat stuff work like this:

1. I need to spill/reload this thing.
2. Check to see if it has side effects.
3. Check to see if it is simple enough: e.g. it only has one register
   destination and no register input.
4. If so, clone the instruction, do the xform, etc. (see the sketch below).
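
A rough standalone sketch of that decision procedure, using simplified
stand-in types rather than the real CodeGen classes (all names below are made
up for illustration):

    struct InstrDesc {
      bool hasSideEffects;   // would be set automatically for isLoad, isCall, etc.
      unsigned NumRegDefs;   // number of register results
      unsigned NumRegUses;   // number of register inputs
    };

    struct Instr {
      const InstrDesc *Desc;
    };

    // Steps 2 and 3: no side effects, exactly one register destination and no
    // register inputs.  If this returns true, the spiller clones the
    // instruction at each use point (step 4) instead of inserting a reload.
    bool isTriviallyReMaterializable(const Instr &MI) {
      const InstrDesc &D = *MI.Desc;
      return !D.hasSideEffects && D.NumRegDefs == 1 && D.NumRegUses == 0;
    }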

Advantages of this are:

1. the .td file describes the behavior of the instructions, not the way the
   algorithm should work.
2. as remat gets smarter in the future, we shouldn't have to be changing the .td
   files.
3. it is easier to explain what the flag means in the .td file, because you
   don't have to pull in the explanation of how the current remat algo works.

Some potential added complexities:

1. Some instructions have to be glued to their predecessor or successor, e.g.
   all of the PC-relative instructions and the condition-code-setting
   instructions. We could mark them as hasSideEffects, but that's not quite
   right: PC-relative loads from constant pools can be remat'ed, for example,
   but doing so requires more than just cloning the instruction. Some
   instructions can be remat'ed but expand to more than one instruction, and
   the allocator will have to make a decision.

2. As stated above, remat is not as simple as cloning in some cases. The target
   will have to decide how to remat the instruction. For example, an ARM
   2-piece constant generation instruction is remat'ed as a load from the
   constant pool (see the hypothetical hook sketch below).
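
One way to give the target that decision is a remat hook it can override. The
interface below is purely a hypothetical sketch (the class, method, and type
names are invented), not an existing API:

    // Stand-in for the real machine instruction type.
    struct MachineInstrStub {};

    class TargetRematInfo {
    public:
      virtual ~TargetRematInfo() = default;

      // Emit whatever sequence recomputes the value defined by OrigDef into
      // DestReg, inserting it before InsertPt.  For an ARM two-instruction
      // constant this could emit a single constant-pool load rather than a
      // clone of the original definition.  Returns false if the value cannot
      // be rematerialized at this point.
      virtual bool rematerialize(unsigned DestReg,
                                 const MachineInstrStub &OrigDef,
                                 MachineInstrStub *InsertPt) = 0;
    };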

//===---------------------------------------------------------------------===//

bb27 ...
        ...
        %reg1037 = ADDri %reg1039, 1
        %reg1038 = ADDrs %reg1032, %reg1039, %NOREG, 10
    Successors according to CFG: 0x8b03bf0 (#5)

bb76 (0x8b03bf0, LLVM BB @0x8b032d0, ID#5):
    Predecessors according to CFG: 0x8b0c5f0 (#3) 0x8b0a7c0 (#4)
        %reg1039 = PHI %reg1070, mbb<bb76.outer,0x8b0c5f0>, %reg1037, mbb<bb27,0x8b0a7c0>

Note ADDri is not a two-address instruction. However, its result %reg1037 is an
operand of the PHI node in bb76 and its operand %reg1039 is the result of the
PHI node. We should treat it as a two-address instruction and make sure the
ADDri is scheduled after any node that reads %reg1039.

//===---------------------------------------------------------------------===//

Use local information (i.e. the register scavenger) to assign the reloaded
value to a free register so that later reloads of the same slot can reuse it:
        ldr r3, [sp, #+4]
        add r3, r3, #3
        ldr r2, [sp, #+8]
        add r2, r2, #2
        ldr r1, [sp, #+4]  <==
        add r1, r1, #1
        ldr r0, [sp, #+4]
        add r0, r0, #2

//===---------------------------------------------------------------------===//

LLVM aggressively hoists common subexpressions out of loops. Sometimes this
can have negative side effects:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
load [i + R1]
...
load [i + R2]
...
load [i + R3]

Suppose there is high register pressure and R1, R2, and R3 get spilled. We
need to implement proper re-materialization to handle this:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
R1 = X + 4  @ re-materialized
load [i + R1]
...
R2 = X + 7  @ re-materialized
load [i + R2]
...
R3 = X + 15  @ re-materialized
load [i + R3]

Furthermore, with re-association, we can enable sharing:

R1 = X + 4
R2 = X + 7
R3 = X + 15

loop:
T = i + X
load [T + 4]
...
load [T + 7]
...
load [T + 15]
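
For reference, this is roughly the kind of source loop that gives rise to the
access pattern above (a hypothetical example; the function name, element type,
and offsets are made up):

    // Three loads at small fixed offsets from the common base X + i; under
    // high register pressure the three base registers may all be spilled.
    int sum_three(const int *a, int X, int n) {
      int s = 0;
      for (int i = 0; i < n; ++i)
        s += a[X + i + 4] + a[X + i + 7] + a[X + i + 15];
      return s;
    }
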
//===---------------------------------------------------------------------===//

It's not always a good idea to choose rematerialization over spilling. If all
the load / store instructions would be folded then spilling is cheaper because
it won't require new live intervals / registers. See 2003-05-31-LongShifts for
an example.

//===---------------------------------------------------------------------===//

With a copying garbage collector, derived pointers must not be retained across
collector safe points; the collector could move the objects and invalidate the
derived pointer. This is bad enough in the first place, but safe points can
crop up unpredictably. Consider:

        %array = load { i32, [0 x %obj] }** %array_addr
        %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
        %old = load %obj** %nth_el
        %z = div i64 %x, %y
        store %obj* %new, %obj** %nth_el

If the i64 division is lowered to a libcall, then a safe point will (must)
appear for the call site. If a collection occurs, %array and %nth_el no longer
point into the correct object.

The fix for this is to copy address calculations so that dependent pointers
are never live across safe point boundaries. But the loads cannot be copied
like this if there was an intervening store, so this may be hard to get right.

Only a concurrent mutator can trigger a collection at the libcall safe point.
So single-threaded programs do not have this requirement, even with a copying
collector. Still, LLVM optimizations would probably undo a front-end's careful
work.

//===---------------------------------------------------------------------===//

The ocaml frametable structure supports liveness information. It would be good
to support it.

//===---------------------------------------------------------------------===//

The FIXME in ComputeCommonTailLength in BranchFolding.cpp needs to be
revisited. The check is there to work around a misuse of directives in inline
assembly.

//===---------------------------------------------------------------------===//

It would be good to detect collector/target compatibility instead of silently
doing the wrong thing.

//===---------------------------------------------------------------------===//

It would be really nice to be able to write patterns in .td files for copies,
which would eliminate a bunch of explicit predicates on them (e.g. no side
effects).  Once this is in place, it would be even better to have tblgen
synthesize the various copy insertion/inspection methods in TargetInstrInfo.

//===---------------------------------------------------------------------===//

Stack coloring improvements:

1. Do proper LiveStackAnalysis on all stack objects including those which are
   not spill slots.
2. Reorder objects to fill in gaps between objects.
   e.g. 4, 1, <gap>, 4, 1, 1, 1, <gap>, 4 => 4, 1, 1, 1, 1, 4, 4
   (a toy sketch of one possible packing heuristic follows).
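
This is only a toy sketch of one possible gap-filling heuristic, not how the
pass is actually structured: lay objects out in decreasing size order so that
small objects fill the alignment padding larger ones would otherwise leave. A
real implementation would also have to respect the LiveStackAnalysis results.

    #include <algorithm>
    #include <vector>

    struct StackObject {
      unsigned Size;       // in bytes
      unsigned Alignment;  // in bytes, assumed to be a power of two
    };

    // Returns the total frame size after packing the objects.
    unsigned layoutFrame(std::vector<StackObject> &Objects) {
      // Place big objects first; equally sized objects keep their order.
      std::stable_sort(Objects.begin(), Objects.end(),
                       [](const StackObject &A, const StackObject &B) {
                         return A.Size > B.Size;
                       });
      unsigned Offset = 0;
      for (const StackObject &O : Objects) {
        Offset = (Offset + O.Alignment - 1) & ~(O.Alignment - 1);  // align up
        Offset += O.Size;
      }
      return Offset;
    }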