- f70f8d9 pretty print node name by Chris Lattner · 19 years ago
- 90564f2 Implement an important entry from README_ALTIVEC: by Chris Lattner · 19 years ago
- 3be2905 move some stuff around, clean things up by Chris Lattner · 19 years ago
- 993c897 Teach the codegen about instructions used for SSE spill code, allowing it by Chris Lattner · 19 years ago
- cea2aa7 Use vmladduhm to do v8i16 multiplies which is faster and simpler than doing by Chris Lattner · 19 years ago
- 19a8152 Implement v16i8 multiply with this code: by Chris Lattner · 19 years ago
- 4980467 Correct comments by Evan Cheng · 19 years ago
- 72dd9bd Lower v8i16 multiply into this code: by Chris Lattner · 19 years ago
- e7c768e Custom lower v4i32 multiplies into a cute sequence, instead of having legalize by Chris Lattner · 19 years ago
- 74e955d Another entry by Evan Cheng · 19 years ago
- 7fa094a Another entry. by Evan Cheng · 19 years ago
- cdfc3c8 Use movss to insert_vector_elt(v, s, 0). by Evan Cheng · 19 years ago
- fd6bdf0 Turn x86 unaligned load/store intrinsics into aligned load/store instructions by Chris Lattner · 19 years ago
- 80edfb3 Fix handling of calls in functions that use vectors. This fixes a crash on by Chris Lattner · 19 years ago
- 5edb8d2 Use two pinsrw to insert an element into v4i32 / v4f32 vector. by Evan Cheng · 19 years ago
- 22fcbb1 remove done item by Chris Lattner · 19 years ago
- f9568d8 Don't diddle VRSAVE if no registers need to be added/removed from it. This by Chris Lattner · 19 years ago
- 48d7c06 Add a MachineInstr::eraseFromParent convenience method. by Chris Lattner · 19 years ago
- 23b7200 Encoding bug by Evan Cheng · 19 years ago
- 402504b Vectors that are known live-in and live-out are clearly already marked in by Chris Lattner · 19 years ago
- 939274f Prefer to allocate V2-V5 before V0,V1. This lets us generate code like this: by Chris Lattner · 19 years ago
- 369503f Move some knowledge about registers out of the code emitter into the register info. by Chris Lattner · 19 years ago
- f7d2372 Use a small table instead of macros to do this conversion. by Chris Lattner · 19 years ago
- c575ca2 Implement v8i16, v16i8 splat using unpckl + pshufd. by Evan Cheng · 19 years ago
- b2be403 implement returns of a vector, testcase here: CodeGen/X86/vec_return.ll by Chris Lattner · 19 years ago
- 8d5a894 Codegen insertelement with constant insertion points as scalar_to_vector by Chris Lattner · 19 years ago
- dbce85d Make sure to check splats of every constant we can, handle splat(31) by by Chris Lattner · 19 years ago
- 51c9c43 Incorrect foldMemoryOperand entries by Evan Cheng · 19 years ago
- 083248e Errors in patterns preventing load folding by Evan Cheng · 19 years ago
- 3c280bf Add checks for __OpenBSD__. by Jeff Cohen · 19 years ago
- bdd558c Teach the ppc backend to use rol and vsldoi to generate splatted constants. by Chris Lattner · 19 years ago
- 966083f add a note by Chris Lattner · 19 years ago
- 5001ea1 FP SETOLT, SETOLE, SETUGE, SETUGT conditions were implemented incorrectly by Evan Cheng · 19 years ago
- 6876e66 Make some code more general, adding support for constant formation of several by Chris Lattner · 19 years ago
- c408382 Learn how to make odd splatted constants in range [17,29]. This implements by Chris Lattner · 19 years ago
- 4a998b9 Pull some code out into a helper function. by Chris Lattner · 19 years ago
- 5913810 Implement a TODO: for any shuffle that can be viewed as a v4[if]32 shuffle, by Chris Lattner · 19 years ago
- cffeb86 Regenerate with adjusted costs by Chris Lattner · 19 years ago
- 586d6a8 Regenerate with correct offset by Chris Lattner · 19 years ago
- c74e710 Increase the opcodes by one each to disambiguate COPY from VMRGHW. by Chris Lattner · 19 years ago
- 6703461 Check in a table, generated by llvm-PerfectShuffle, of optimal shuffles by Chris Lattner · 19 years ago
- 06aef15 movduprm, movshduprm bugs by Evan Cheng · 19 years ago
- d8e8223 Encoding bugs by Evan Cheng · 19 years ago
- 800f12d Can't fold loads into alias vector SSE ops used for scalar operation. The load by Evan Cheng · 19 years ago
- f3f69de Implement a TODO: have the legalizer canonicalize a bunch of operations to by Chris Lattner · 19 years ago
- 2efce0a Add support for promoting stores from one legal type to another, allowing us by Chris Lattner · 19 years ago
- b17f167 Make the BUILD_VECTOR lowering code much more aggressive w.r.t constant vectors. by Chris Lattner · 19 years ago
- 7f6cc0c Fix a bug in the 'shuffle(undef,x,mask) -> shuffle(x, undef,mask')' xform by Chris Lattner · 19 years ago
- 706126d Canonicalize shuffle(undef,x,mask) -> shuffle(x, undef,mask'). by Chris Lattner · 19 years ago
- 730b456 Fix a crash when faced with a shuffle vector that has an undef in its mask. by Chris Lattner · 19 years ago
- 6e94af7 Add patterns for matching vnots with bit converted inputs. Most of these will by Chris Lattner · 19 years ago
- 1fcee4e Add a new vnot_conv predicate for matching vnot's where the allones vector is by Chris Lattner · 19 years ago
- 547a16f Make these predicates return true for bit_convert(buildvector)'s as well as by Chris Lattner · 19 years ago
- 60d3fa2 More encoding bugs by Evan Cheng · 19 years ago
- 1af1898 pslldrm, psrawrm, etc. encoding bug by Evan Cheng · 19 years ago
- 7076e2d hsubp{s|d} encoding bug by Evan Cheng · 19 years ago
- 57ebe9f Silly bug by Evan Cheng · 19 years ago
- 39fc145 Do not use movs{h|l}dup for a shuffle with a single non-undef node. by Evan Cheng · 19 years ago
- efb4735 significant cleanups to code that uses insert/extractelt heavily. This builds by Chris Lattner · 19 years ago
- 407428e Added SSE (and other) entries to foldMemoryOperand(). by Evan Cheng · 19 years ago
- 9ab1ac5 Some clean up by Evan Cheng · 19 years ago
- b097aa9 Allow undef in a shuffle mask by Chris Lattner · 19 years ago
- f95670f Move these ctors out of line by Chris Lattner · 19 years ago
- d953947 Last few SSE3 intrinsics. by Evan Cheng · 19 years ago
- de6df88 Teach scalarrepl to promote unions of vectors and floats, producing by Chris Lattner · 19 years ago
- f3e1b1d Misc. SSE2 intrinsics: clflush, lfence, mfence by Evan Cheng · 19 years ago
- d9245ca We were not adjusting the frame size to ensure proper alignment when alloca / by Evan Cheng · 19 years ago
- 4f51d85 New entry by Evan Cheng · 19 years ago
- e25fdaf Don't print out the install command for Intrinsics.gen unless VERBOSE mode. by Reid Spencer · 19 years ago
- 3824e50 Make this assertion better by Chris Lattner · 19 years ago
- 1a635d6 Move the rest of the PPCTargetLowering::LowerOperation cases out into by Chris Lattner · 19 years ago
- f1b4708 Pull the VECTOR_SHUFFLE and BUILD_VECTOR lowering code out into separate by Chris Lattner · 19 years ago
- 0fa07f9 Implement value #'ing for vector operations, implementing by Chris Lattner · 19 years ago
- bb5c43e pcmpeq* and pcmpgt* intrinsics. by Evan Cheng · 19 years ago
- 0ac8ea9 psll*, psrl*, and psra* intrinsics. by Evan Cheng · 19 years ago
- 7a1006c Remove the .cvsignore file so this directory can be pruned. by Reid Spencer · 19 years ago
- 7095ec1 Remove .cvsignore so that this directory can be pruned. by Reid Spencer · 19 years ago
- 99c1942 Handle some kernel code that ends in [0 x sbyte]. I think this is safe by Andrew Lenharth · 19 years ago
- 60d07ee Expand some code with temporary variables to rid ourselves of the warning by Reid Spencer · 19 years ago
- 2b21ac6 Doh. PANDrm, etc. are not commutable. by Evan Cheng · 19 years ago
- a39d798 Force non-darwin targets to use a static relo model. This fixes PR734, by Chris Lattner · 19 years ago
- ed93790 add a note, move an altivec todo to the altivec list. by Chris Lattner · 19 years ago
- 61e99c9 linear -> constant time by Andrew Lenharth · 19 years ago
- 3758552 Add the README files to the distribution. by Reid Spencer · 19 years ago
- 0058694 psad, pmax, pmin intrinsics. by Evan Cheng · 19 years ago
- 2f40b1b Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc. by Evan Cheng · 19 years ago
- f998984 X86 SSE2 supports v8i16 multiplication by Evan Cheng · 19 years ago
- fc7c17a Update by Evan Cheng · 19 years ago
- 49ac1bf padds{b|w}, paddus{b|w}, psubs{b|w}, psubus{b|w} intrinsics. by Evan Cheng · 19 years ago
- a50a086 Naming inconsistency. by Evan Cheng · 19 years ago
- d2a6d54 SSE / SSE2 conversion intrinsics. by Evan Cheng · 19 years ago
- 2c3ae37 All "integer" logical ops (pand, por, pxor) are now promoted to v2i64. by Evan Cheng · 19 years ago
- cc98761 Promote vector AND, OR, and XOR by Evan Cheng · 19 years ago
- 536c006 Make sure CVS versions of yacc and lex files get distributed. by Reid Spencer · 19 years ago
- ad20726 Get rid of a signed/unsigned compare warning. by Reid Spencer · 19 years ago
- ac225ca Add a new way to match vector constants, which makes it easier to bang bits of by Chris Lattner · 19 years ago
- 9fb9213 Turn casts into getelementptr's when possible. This enables SROA to be more by Chris Lattner · 19 years ago
- 403d43a Don't emit useless warning messages. by Reid Spencer · 19 years ago
- e87192a Rename get_VSPLI_elt -> get_VSPLTI_elt by Chris Lattner · 19 years ago
- 91b740d Promote v4i32, v8i16, v16i8 load to v2i64 load. by Evan Cheng · 19 years ago