Chris Lattner | 0095054 | 2001-06-06 20:29:01 +0000

Date: Fri, 1 Jun 2001 17:08:44 -0500 (CDT)
From: Chris Lattner <sabre@nondot.org>
To: Vikram S. Adve <vadve@cs.uiuc.edu>
Subject: RE: Interesting: GCC passes

> That is very interesting.  I agree that some of these could be done on LLVM
> at link-time, but it is the extra time required that concerns me.  Link-time
> optimization is severely time-constrained.

If we were to reimplement any of these optimizations, I assume that we
could do them a translation unit at a time, just as GCC does now.  This
would lead to a pipeline like this:

Static optimizations, translation unit at a time:
.c --GCC--> .llvm --llvmopt--> .llvm

Link-time optimizations:
.llvm --llvm-ld--> .llvm --llvm-link-opt--> .llvm

Of course, many optimizations could be shared between llvmopt and
llvm-link-opt, but they wouldn't need to be shared...  Thus compile time
could be faster, because we are using a "smarter" IR (SSA based).

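A toy illustration of why an SSA-based IR is "smarter" here: since every value is defined exactly once, common subexpression elimination reduces to a dictionary lookup on (opcode, operands) keys, with no need to reason about redefinitions.  This is a minimal sketch in Python, with an invented three-address instruction format, not the actual LLVM representation:

```python
# CSE over SSA-form instructions (hypothetical IR for illustration).
# Each instruction is (dest, op, operands); every dest is unique (SSA),
# so two identical (op, operands) keys always compute the same value.

def cse(instructions):
    seen = {}      # (op, operands) -> dest that already computes it
    replace = {}   # redundant dest -> canonical dest
    out = []
    for dest, op, operands in instructions:
        # Rewrite uses of values whose defining instruction was eliminated.
        operands = tuple(replace.get(a, a) for a in operands)
        key = (op, operands)
        if key in seen:
            replace[dest] = seen[key]   # drop the redundant instruction
        else:
            seen[key] = dest
            out.append((dest, op, operands))
    return out

prog = [
    ("t1", "add", ("a", "b")),
    ("t2", "mul", ("t1", "c")),
    ("t3", "add", ("a", "b")),   # redundant: same computation as t1
    ("t4", "mul", ("t3", "c")),  # redundant once t3 is rewritten to t1
]
print(cse(prog))  # only t1 and t2 survive
```

A non-SSA version of this pass would also have to invalidate table entries whenever a register is reassigned, which is exactly the bookkeeping SSA makes unnecessary.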
> BTW, about SGI, "borrowing" SSA-based optimizations from one compiler and
> putting them into another is not necessarily easier than re-doing it.
> Optimization code is usually heavily tied in to the specific IR they use.

Understood.  The only reason that I brought this up is because SGI's IR is
more similar to LLVM than it is different in many respects (SSA based,
relatively low level, etc.), and could be easily adapted.  Also, their
optimizations are written in C++ and are actually somewhat
structured... of course it would be no walk in the park, but it would be
much less time consuming to adapt, say, SSA-PRE than to rewrite it.

> But your larger point is valid that adding SSA based optimizations is
> feasible and should be fun.  (Again, link time cost is the issue.)

Assuming link-time cost wasn't an issue, the question is:
Does using GCC's backend buy us anything?

> It also occurs to me that GCC is probably doing quite a bit of back-end
> optimization (step 16 in your list).  Do you have a breakdown of that?

Not really.  The irritating part of GCC is that it mixes it all up and
doesn't have a clean separation of concerns.  A lot of the "back end
optimization" happens right along with other data optimizations (i.e., CSE
of machine specific things).

As far as REAL back end optimizations go, it looks something like this:

1. Instruction combination: try to make CISCy instructions, if available
2. Register movement: try to get registers in the right places for the
architecture to avoid register to register moves.  For example, try to get
the first argument of a function to naturally land in %o0 for SPARC.
3. Instruction scheduling: 'nuff said :)
4. Register class preferencing: ??
5. Local register allocation
6. Global register allocation
7. Spilling
8. Local regalloc
9. Jump optimization
10. Delay slot scheduling
11. Branch shortening for CISC machines
12. Instruction selection & peephole optimization
13. Debug info output

But none of this would be usable for LLVM anyway, unless we were using
GCC as a static compiler.
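To make step 1 (and the peephole half of step 12) concrete, here is a toy combiner, a sketch of the idea rather than GCC's actual pass, using the same invented three-address format as before: it fuses a multiply whose result immediately feeds an add into a single CISCy multiply-add.

```python
# Toy instruction-combination / peephole pass (hypothetical IR, not RTL):
# fuse "t = mul a, b" followed by "d = add t, c" into "d = madd a, b, c",
# the kind of CISCy instruction step 1 builds when the target has one.

def combine(instrs):
    out = []
    i = 0
    while i < len(instrs):
        if i + 1 < len(instrs):
            d1, op1, a1 = instrs[i]
            d2, op2, a2 = instrs[i + 1]
            # Fuse only when the mul result is consumed by the very next add.
            # A real pass would also verify that d1 has no other uses
            # before deleting its defining instruction.
            if op1 == "mul" and op2 == "add" and a2[0] == d1:
                out.append((d2, "madd", (a1[0], a1[1], a2[1])))
                i += 2
                continue
        out.append(instrs[i])
        i += 1
    return out

prog = [
    ("t0", "mul", ("x", "y")),
    ("r",  "add", ("t0", "z")),  # consumed immediately -> fused into madd
    ("s",  "add", ("r", "w")),   # no preceding mul -> left alone
]
print(combine(prog))
```

The same window-matching shape, with target-specific patterns instead of this hard-coded one, covers a lot of what a peephole optimizer does.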

-Chris