Target Independent Opportunities:

//===---------------------------------------------------------------------===//

We should make the various targets' "IMPLICIT_DEF" instructions be a single
target-independent opcode like TargetInstrInfo::INLINEASM. This would allow
us to eliminate the TargetInstrDesc::isImplicitDef() method, and would allow
us to avoid having to define this for every target for every register class.

//===---------------------------------------------------------------------===//

With the recent changes to make the implicit def/use set explicit in
machineinstrs, we should change the target descriptions for 'call' instructions
so that the .td files don't list all the call-clobbered registers as implicit
defs. Instead, these should be added by the code generator (e.g. on the dag).

This has a number of uses:

1. PPC32/64 and X86 32/64 can avoid having multiple copies of call instructions
   for their different impdef sets.
2. Targets with multiple calling convs (e.g. x86) which have different clobber
   sets don't need copies of call instructions.
3. 'Interprocedural register allocation' can be done to reduce the clobber sets
   of calls.

//===---------------------------------------------------------------------===//

Make the PPC branch selector target independent.

//===---------------------------------------------------------------------===//

Get the C front-end to expand hypot(x,y) -> llvm.sqrt(x*x+y*y) when errno and
precision don't matter (-ffast-math). Misc/mandel will like this. :)
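
A rough source-level sketch of the transform (the function names here are just
for illustration):

#include <math.h>

double dist(double x, double y) {
  return hypot(x, y);        /* with -ffast-math and errno ignored ... */
}

double dist_expanded(double x, double y) {
  return sqrt(x*x + y*y);    /* ... this is what we would like to emit */
}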

//===---------------------------------------------------------------------===//

Solve this DAG isel folding deficiency:

int X, Y;

void fn1(void)
{
  X = X | (Y << 3);
}

compiles to

fn1:
 movl Y, %eax
 shll $3, %eax
 orl X, %eax
 movl %eax, X
 ret

The problem is the store's chain operand is not the load X but rather
a TokenFactor of the load X and load Y, which prevents the folding.

There are two ways to fix this:

1. The dag combiner can start using alias analysis to realize that y/x
   don't alias, making the store to X not dependent on the load from Y.
2. The generated isel could be made smarter in the case it can't
   disambiguate the pointers.

Number 1 is the preferred solution.

This has been "fixed" by a TableGen hack. But that is a short-term workaround
which will be removed once the proper fix is made.

//===---------------------------------------------------------------------===//

On targets with expensive 64-bit multiply, we could LSR this:

for (i = ...; ++i) {
  x = 1ULL << i;
}

into:
 long long tmp = 1;
 for (i = ...; ++i, tmp+=tmp)
   x = tmp;

This would be a win on ppc32, but not x86 or ppc64.

//===---------------------------------------------------------------------===//

Shrink: (setlt (loadi32 P), 0) -> (setlt (loadi8 Phi), 0)
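
At the source level, the idea is that a "less than zero" test only needs the
byte holding the sign bit. A sketch (assuming a little-endian target with
4-byte ints; names invented):

int is_negative(int *P) {
  return *P < 0;                       /* loads all four bytes */
}

int is_negative_shrunk(int *P) {
  return ((signed char *)P)[3] < 0;    /* only the sign byte is needed */
}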

//===---------------------------------------------------------------------===//

Reassociate should turn: X*X*X*X -> t=(X*X) (t*t) to eliminate a multiply.
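
In C terms (illustrative only):

int pow4(int X) {
  return X*X*X*X;            /* three multiplies */
}

int pow4_reassoc(int X) {
  int t = X*X;
  return t*t;                /* two multiplies */
}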

//===---------------------------------------------------------------------===//

An interesting testcase for add/shift/mul reassoc:

int bar(int x, int y) {
  return x*x*x+y+x*x*x*x*x*y*y*y*y;
}
int foo(int z, int n) {
  return bar(z, n) + bar(2*z, 2*n);
}

Reassociate should handle the example in GCC PR16157.

//===---------------------------------------------------------------------===//

These two functions should generate the same code on big-endian systems:

int g(int *j,int *l) { return memcmp(j,l,4); }
int h(int *j, int *l) { return *j - *l; }

This could be done in SelectionDAGISel.cpp, along with other special cases,
for 1,2,4,8 bytes.

//===---------------------------------------------------------------------===//

It would be nice to revert this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20060213/031986.html

And teach the dag combiner enough to simplify the code expanded before
legalize. It seems plausible that this knowledge would let it simplify other
stuff too.

//===---------------------------------------------------------------------===//

For vector types, TargetData.cpp::getTypeInfo() returns alignment that is equal
to the type size. It works but can be overly conservative as the alignment of
specific vector types is target dependent.

//===---------------------------------------------------------------------===//

We should add 'unaligned load/store' nodes, and produce them from code like
this:

typedef float v4sf __attribute__((vector_size(16)));

v4sf example(float *P) {
  return (v4sf){P[0], P[1], P[2], P[3] };
}

//===---------------------------------------------------------------------===//

Add support for conditional increments, and other related patterns. Instead
of:

 movl 136(%esp), %eax
 cmpl $0, %eax
 je LBB16_2 #cond_next
LBB16_1: #cond_true
 incl _foo
LBB16_2: #cond_next

emit:
 movl _foo, %eax
 cmpl $1, %edi
 sbbl $-1, %eax
 movl %eax, _foo
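
For reference, a source pattern of roughly this shape (names invented)
produces the branchy form above:

extern int foo;

void maybe_bump(int x) {
  if (x)
    foo++;        /* i.e. foo += (x != 0), which the cmp/sbb sequence computes */
}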

//===---------------------------------------------------------------------===//

Combine: a = sin(x), b = cos(x) into a,b = sincos(x).

Expand these to calls of sin/cos and stores:
 void sincos(double x, double *sin, double *cos);
 void sincosf(float x, float *sin, float *cos);
 void sincosl(long double x, long double *sin, long double *cos);

Doing so could allow SROA of the destination pointers. See also:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17687
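
A small example of the source pattern the combine targets (illustrative only):

#include <math.h>

void polar_to_xy(double r, double theta, double *x, double *y) {
  *x = r * cos(theta);   /* cos(theta) and sin(theta) share an argument,       */
  *y = r * sin(theta);   /* so one sincos(theta, &s, &c) call could serve both */
}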

//===---------------------------------------------------------------------===//

Scalar Repl cannot currently promote this testcase to 'ret i64 cst':

 %struct.X = type { i32, i32 }
 %struct.Y = type { %struct.X }

define i64 @bar() {
 %retval = alloca %struct.Y, align 8
 %tmp12 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 0
 store i32 0, i32* %tmp12
 %tmp15 = getelementptr %struct.Y* %retval, i32 0, i32 0, i32 1
 store i32 1, i32* %tmp15
 %retval.upgrd.1 = bitcast %struct.Y* %retval to i64*
 %retval.upgrd.2 = load i64* %retval.upgrd.1
 ret i64 %retval.upgrd.2
}

It should be extended to do so.

//===---------------------------------------------------------------------===//

-scalarrepl should promote this to be a vector scalar.

 %struct..0anon = type { <4 x float> }

define void @test1(<4 x float> %V, float* %P) {
 %u = alloca %struct..0anon, align 16
 %tmp = getelementptr %struct..0anon* %u, i32 0, i32 0
 store <4 x float> %V, <4 x float>* %tmp
 %tmp1 = bitcast %struct..0anon* %u to [4 x float]*
 %tmp.upgrd.1 = getelementptr [4 x float]* %tmp1, i32 0, i32 1
 %tmp.upgrd.2 = load float* %tmp.upgrd.1
 %tmp3 = mul float %tmp.upgrd.2, 2.000000e+00
 store float %tmp3, float* %P
 ret void
}

//===---------------------------------------------------------------------===//

Turn this into a single byte store with no load (the other 3 bytes are
unmodified):

void %test(uint* %P) {
 %tmp = load uint* %P
 %tmp14 = or uint %tmp, 3305111552
 %tmp15 = and uint %tmp14, 3321888767
 store uint %tmp15, uint* %P
 ret void
}

(The or/and pair simply forces the high byte to 0xC5, so only that byte needs
to be written.)

//===---------------------------------------------------------------------===//

dag/inst combine "clz(x)>>5 -> x==0" for 32-bit x.

Compile:

int bar(int x)
{
  int t = __builtin_clz(x);
  return -(t>>5);
}

to:

_bar: addic r3,r3,-1
      subfe r3,r3,r3
      blr

//===---------------------------------------------------------------------===//

Legalize should lower cttz like this:
 cttz(x) = popcnt((x-1) & ~x)

on targets that have popcnt but not cttz. Itanium, what else? (Note: the
popcnt identity above counts trailing zeros, not leading ones; ctlz would
need the operand's leading bit smeared down first.)
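
A quick sanity check of the identity, written with GCC builtins (purely
illustrative; not part of any pass):

#include <assert.h>

static unsigned cttz_via_popcnt(unsigned x) {
  return (unsigned)__builtin_popcount((x - 1) & ~x);
}

int main(void) {
  for (unsigned x = 1; x < 1000000; ++x)
    assert(cttz_via_popcnt(x) == (unsigned)__builtin_ctz(x));
  return 0;
}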

//===---------------------------------------------------------------------===//

quantum_sigma_x in 462.libquantum contains the following loop:

 for(i=0; i<reg->size; i++)
 {
   /* Flip the target bit of each basis state */
   reg->node[i].state ^= ((MAX_UNSIGNED) 1 << target);
 }

Where MAX_UNSIGNED/state is a 64-bit int. On a 32-bit platform it would be just
so cool to turn it into something like:

 long long Res = ((MAX_UNSIGNED) 1 << target);
 if (target < 32) {
   for(i=0; i<reg->size; i++)
     reg->node[i].state ^= Res & 0xFFFFFFFFULL;
 } else {
   for(i=0; i<reg->size; i++)
     reg->node[i].state ^= Res & 0xFFFFFFFF00000000ULL;
 }

... which would only do one 32-bit XOR per loop iteration instead of two.

It would also be nice to recognize that reg->size doesn't alias reg->node[i],
but alas...

//===---------------------------------------------------------------------===//

This isn't recognized as bswap by instcombine:

unsigned int swap_32(unsigned int v) {
  v = ((v & 0x00ff00ffU) << 8) | ((v & 0xff00ff00U) >> 8);
  v = ((v & 0x0000ffffU) << 16) | ((v & 0xffff0000U) >> 16);
  return v;
}

Nor is this (yes, it really is bswap):

unsigned long reverse(unsigned v) {
  unsigned t;
  t = v ^ ((v << 16) | (v >> 16));
  t &= ~0xff0000;
  v = (v << 24) | (v >> 8);
  return v ^ (t >> 8);
}

//===---------------------------------------------------------------------===//

These should turn into single 16-bit (unaligned?) loads on little/big endian
processors.

unsigned short read_16_le(const unsigned char *adr) {
  return adr[0] | (adr[1] << 8);
}
unsigned short read_16_be(const unsigned char *adr) {
  return (adr[0] << 8) | adr[1];
}

//===---------------------------------------------------------------------===//

-instcombine should handle this transform:
   icmp pred (sdiv X, C1), C2
when X, C1, and C2 are unsigned. Similarly for udiv and signed operands.

Currently InstCombine avoids this transform but will do it when the signs of
the operands and the sign of the divide match. See the FIXME in
InstructionCombining.cpp in the visitSetCondInst method after the switch case
for Instruction::UDiv (around line 4447) for more details.

The SingleSource/Benchmarks/Shootout-C++/hash and hash2 tests have examples of
this construct.
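
For illustration, the kind of source-level comparison this folds (made-up
example; the compare on the quotient becomes a range check on X):

int in_bucket(unsigned x) {
  return (x / 10) == 5;      /* equivalent to: return x >= 50 && x < 60; */
}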

//===---------------------------------------------------------------------===//

Instcombine misses several of these cases (see the testcase in the patch):
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg01519.html

//===---------------------------------------------------------------------===//

viterbi speeds up *significantly* if the various "history"-related copy loops
are turned into memcpy calls at the source level. We need a "loops to memcpy"
pass.
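
The shape of loop meant here, and what the pass would produce (function and
variable names invented):

#include <string.h>

void copy_history(int *dst_hist, const int *src_hist, unsigned len) {
  /* the element-by-element copy that shows up in viterbi ... */
  for (unsigned i = 0; i < len; i++)
    dst_hist[i] = src_hist[i];
}

void copy_history_memcpy(int *dst_hist, const int *src_hist, unsigned len) {
  /* ... and the single call a "loops to memcpy" pass would turn it into */
  memcpy(dst_hist, src_hist, len * sizeof(*dst_hist));
}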

//===---------------------------------------------------------------------===//

Consider:

typedef unsigned U32;
typedef unsigned long long U64;
int test (U32 *inst, U64 *regs) {
    U64 effective_addr2;
    U32 temp = *inst;
    int r1 = (temp >> 20) & 0xf;
    int b2 = (temp >> 16) & 0xf;
    effective_addr2 = temp & 0xfff;
    if (b2) effective_addr2 += regs[b2];
    b2 = (temp >> 12) & 0xf;
    if (b2) effective_addr2 += regs[b2];
    effective_addr2 &= regs[4];
    if ((effective_addr2 & 3) == 0)
        return 1;
    return 0;
}

Note that only the low 2 bits of effective_addr2 are used. On 32-bit systems,
we don't eliminate the computation of the top half of effective_addr2 because
we don't have whole-function selection dags. On x86, this means we use one
extra register for the function when effective_addr2 is declared as U64 than
when it is declared U32.

//===---------------------------------------------------------------------===//

Promoting i32 bswap can use i64 bswap + shr. Useful on targets with 64-bit
regs and bswap, like Itanium.
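
The identity involved, spelled out with GCC builtins (illustrative only):

#include <stdint.h>

uint32_t bswap32_via_64(uint32_t x) {
  /* zero-extend to 64 bits and byte-swap; the four interesting bytes land in
     the high half, so shift them back down */
  return (uint32_t)(__builtin_bswap64((uint64_t)x) >> 32);
}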

//===---------------------------------------------------------------------===//

LSR should know what GPR types a target has. This code:

volatile short X, Y; // globals

void foo(int N) {
  int i;
  for (i = 0; i < N; i++) { X = i; Y = i*4; }
}

produces two identical IV's (after promotion) on PPC/ARM:

LBB1_1: @bb.preheader
 mov r3, #0
 mov r2, r3
 mov r1, r3
LBB1_2: @bb
 ldr r12, LCPI1_0
 ldr r12, [r12]
 strh r2, [r12]
 ldr r12, LCPI1_1
 ldr r12, [r12]
 strh r3, [r12]
 add r1, r1, #1 <- [0,+,1]
 add r3, r3, #4
 add r2, r2, #1 <- [0,+,1]
 cmp r1, r0
 bne LBB1_2 @bb

//===---------------------------------------------------------------------===//

Tail call elim should be more aggressive, checking to see if the call is
followed by an uncond branch to an exit block.

; This testcase is due to tail-duplication not wanting to copy the return
; instruction into the terminating blocks because there was other code
; optimized out of the function after the taildup happened.
; RUN: llvm-upgrade < %s | llvm-as | opt -tailcallelim | llvm-dis | not grep call

int %t4(int %a) {
entry:
 %tmp.1 = and int %a, 1
 %tmp.2 = cast int %tmp.1 to bool
 br bool %tmp.2, label %then.0, label %else.0

then.0:
 %tmp.5 = add int %a, -1
 %tmp.3 = call int %t4( int %tmp.5 )
 br label %return

else.0:
 %tmp.7 = setne int %a, 0
 br bool %tmp.7, label %then.1, label %return

then.1:
 %tmp.11 = add int %a, -2
 %tmp.9 = call int %t4( int %tmp.11 )
 br label %return

return:
 %result.0 = phi int [ 0, %else.0 ], [ %tmp.3, %then.0 ],
                     [ %tmp.9, %then.1 ]
 ret int %result.0
}

//===---------------------------------------------------------------------===//

Tail recursion elimination is not transforming this function, because it is
returning n, which fails the isDynamicConstant check in the accumulator
recursion checks.

long long fib(const long long n) {
  switch(n) {
    case 0:
    case 1:
      return n;
    default:
      return fib(n-1) + fib(n-2);
  }
}

//===---------------------------------------------------------------------===//

Argument promotion should promote arguments for recursive functions, like
this:

; RUN: llvm-upgrade < %s | llvm-as | opt -argpromotion | llvm-dis | grep x.val

implementation   ; Functions:

internal int %foo(int* %x) {
entry:
 %tmp = load int* %x
 %tmp.foo = call int %foo(int *%x)
 ret int %tmp.foo
}

int %bar(int* %x) {
entry:
 %tmp3 = call int %foo( int* %x) ; <int>[#uses=1]
 ret int %tmp3
}

//===---------------------------------------------------------------------===//

"basicaa" should know how to look through "or" instructions that act like add
instructions. For example in this code, the x*4+1 is turned into x*4 | 1, and
basicaa can't analyze the array subscript, leading to duplicated loads in the
generated code:

void test(int X, int Y, int a[]) {
  int i;
  for (i=2; i<1000; i+=4) {
    a[i+0] = a[i-1+0]*a[i-2+0];
    a[i+1] = a[i-1+1]*a[i-2+1];
    a[i+2] = a[i-1+2]*a[i-2+2];
    a[i+3] = a[i-1+3]*a[i-2+3];
  }
}

//===---------------------------------------------------------------------===//

We should investigate an instruction sinking pass. Consider this silly
example in pic mode:

#include <assert.h>
void foo(int x) {
  assert(x);
  //...
}

we compile this to:
_foo:
 subl $28, %esp
 call "L1$pb"
"L1$pb":
 popl %eax
 cmpl $0, 32(%esp)
 je LBB1_2 # cond_true
LBB1_1: # return
 # ...
 addl $28, %esp
 ret
LBB1_2: # cond_true
...

The PIC base computation (call+popl) is only used on one path through the
code, but is currently always computed in the entry block. It would be
better to sink the picbase computation down into the block for the
assertion, as it is the only one that uses it. This happens for a lot of
code with early outs.

Another example is loads of arguments, which are usually emitted into the
entry block on targets like x86. If not used in all paths through a
function, they should be sunk into the ones that do.

In this case, whole-function-isel would also handle this.

//===---------------------------------------------------------------------===//

Investigate lowering of sparse switch statements into perfect hash tables:
http://burtleburtle.net/bob/hash/perfect.html
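
A hand-written sketch of what such a lowering looks like (the hash, tables,
and case values below are invented for illustration; this is not what any
pass emits today):

/* original sparse switch */
int classify(unsigned x) {
  switch (x) {
    case 10:    return 1;
    case 100:   return 2;
    case 1000:  return 3;
    case 10000: return 4;
    default:    return 0;
  }
}

/* lowered form: x % 7 happens to be a perfect hash for these four keys, so a
   single table probe plus one compare replaces the compare chain */
static const unsigned key[7] = { 0, 0, 100, 10, 10000, 0, 1000 };
static const int     val[7] = { 0, 0,   2,  1,     4, 0,    3 };

int classify_hashed(unsigned x) {
  unsigned h = x % 7;
  return (key[h] == x) ? val[h] : 0;
}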

//===---------------------------------------------------------------------===//

We should turn things like "load+fabs+store" and "load+fneg+store" into the
corresponding integer operations. On a Yonah, this loop:

double a[256];
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] = -a[i];

is twice as slow as this loop:

long long a[256];
  for (b = 0; b < 10000000; b++)
    for (i = 0; i < 256; i++)
      a[i] ^= (1ULL << 63);

and I suspect other processors are similar. On X86 in particular this is a
big win because doing this with integers allows the use of read/modify/write
instructions.

//===---------------------------------------------------------------------===//

DAG Combiner should try to combine small loads into larger loads when
profitable. For example, we compile this C++ example:

struct THotKey { short Key; bool Control; bool Shift; bool Alt; };
extern THotKey m_HotKey;
THotKey GetHotKey () { return m_HotKey; }

into (-O3 -fno-exceptions -static -fomit-frame-pointer):

__Z9GetHotKeyv:
 pushl %esi
 movl 8(%esp), %eax
 movb _m_HotKey+3, %cl
 movb _m_HotKey+4, %dl
 movb _m_HotKey+2, %ch
 movw _m_HotKey, %si
 movw %si, (%eax)
 movb %ch, 2(%eax)
 movb %cl, 3(%eax)
 movb %dl, 4(%eax)
 popl %esi
 ret $4

GCC produces:

__Z9GetHotKeyv:
 movl _m_HotKey, %edx
 movl 4(%esp), %eax
 movl %edx, (%eax)
 movzwl _m_HotKey+4, %edx
 movw %dx, 4(%eax)
 ret $4

The LLVM IR contains the needed alignment info, so we should be able to
merge the loads and stores into 4-byte loads:

 %struct.THotKey = type { i16, i8, i8, i8 }
define void @_Z9GetHotKeyv(%struct.THotKey* sret %agg.result) nounwind {
...
 %tmp2 = load i16* getelementptr (@m_HotKey, i32 0, i32 0), align 8
 %tmp5 = load i8* getelementptr (@m_HotKey, i32 0, i32 1), align 2
 %tmp8 = load i8* getelementptr (@m_HotKey, i32 0, i32 2), align 1
 %tmp11 = load i8* getelementptr (@m_HotKey, i32 0, i32 3), align 2

Alternatively, we should use a small amount of base-offset alias analysis
to make it so the scheduler doesn't need to hold all the loads in regs at
once.

//===---------------------------------------------------------------------===//

We should extend parameter attributes to capture more information about
pointer parameters for alias analysis. Some ideas:

1. Add a "nocapture" attribute, which indicates that the callee does not store
   the address of the parameter into a global or any other memory location
   visible to the callee. This can be used to make basicaa and other analyses
   more powerful. It is true for things like memcpy, strcat, and many other
   things, including structs passed by value, most C++ references, etc.
2. Generalize readonly to be set on parameters. This is important mod/ref
   info for the function, which is important for basicaa and others. It can
   also be used by the inliner to avoid inserting a memcpy for byval
   arguments when the function is inlined.

These attributes can be inferred for functions by various analysis passes
such as the globalsmodrefaa pass.
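
For intuition, C-level examples of the two properties (the comments name the
proposed attributes; nothing below is existing IR syntax):

#include <stddef.h>

/* 'nocapture': the callee never stashes the pointer anywhere that outlives
   the call, so no other code can later reach this memory through it. */
void zero_fill(int *p /* nocapture */, size_t n) {
  for (size_t i = 0; i < n; i++)
    p[i] = 0;
}

/* 'readonly' on a parameter: the callee only reads through the pointer,
   which is exactly the mod/ref information basicaa wants. */
int sum(const int *p /* nocapture, readonly */, size_t n) {
  int s = 0;
  for (size_t i = 0; i < n; i++)
    s += p[i];
  return s;
}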

//===---------------------------------------------------------------------===//