  1. f70f8d9 pretty print node name by Chris Lattner · 19 years ago
  2. 90564f2 Implement an important entry from README_ALTIVEC: by Chris Lattner · 19 years ago
  3. 3be2905 move some stuff around, clean things up by Chris Lattner · 19 years ago
  4. 993c897 Teach the codegen about instructions used for SSE spill code, allowing it by Chris Lattner · 19 years ago
  5. cea2aa7 Use vmladduhm to do v8i16 multiplies which is faster and simpler than doing by Chris Lattner · 19 years ago
  6. 19a8152 Implement v16i8 multiply with this code: by Chris Lattner · 19 years ago
  7. 4980467 Correct comments by Evan Cheng · 19 years ago
  8. 72dd9bd Lower v8i16 multiply into this code: by Chris Lattner · 19 years ago
  9. e7c768e Custom lower v4i32 multiplies into a cute sequence, instead of having legalize by Chris Lattner · 19 years ago
  10. 74e955d Another entry by Evan Cheng · 19 years ago
  11. 7fa094a Another entry. by Evan Cheng · 19 years ago
  12. cdfc3c8 Use movss to insert_vector_elt(v, s, 0). by Evan Cheng · 19 years ago
  13. fd6bdf0 Turn x86 unaligned load/store intrinsics into aligned load/store instructions by Chris Lattner · 19 years ago
  14. 80edfb3 Fix handling of calls in functions that use vectors. This fixes a crash on by Chris Lattner · 19 years ago
  15. 5edb8d2 Use two pinsrw to insert an element into v4i32 / v4f32 vector. by Evan Cheng · 19 years ago
  16. 22fcbb1 remove done item by Chris Lattner · 19 years ago
  17. f9568d8 Don't diddle VRSAVE if no registers need to be added/removed from it. This by Chris Lattner · 19 years ago
  18. 48d7c06 Add a MachineInstr::eraseFromParent convenience method. by Chris Lattner · 19 years ago
  19. 23b7200 Encoding bug by Evan Cheng · 19 years ago
  20. 402504b Vectors that are known live-in and live-out are clearly already marked in by Chris Lattner · 19 years ago
  21. 939274f Prefer to allocate V2-V5 before V0,V1. This lets us generate code like this: by Chris Lattner · 19 years ago
  22. 369503f Move some knowledge about registers out of the code emitter into the register info. by Chris Lattner · 19 years ago
  23. f7d2372 Use a small table instead of macros to do this conversion. by Chris Lattner · 19 years ago
  24. c575ca2 Implement v8i16, v16i8 splat using unpckl + pshufd. by Evan Cheng · 19 years ago
  25. b2be403 implement returns of a vector, testcase here: CodeGen/X86/vec_return.ll by Chris Lattner · 19 years ago
  26. 8d5a894 Codegen insertelement with constant insertion points as scalar_to_vector by Chris Lattner · 19 years ago
  27. dbce85d Make sure to check splats of every constant we can, handle splat(31) by by Chris Lattner · 19 years ago
  28. 51c9c43 Incorrect foldMemoryOperand entries by Evan Cheng · 19 years ago
  29. 083248e Errors in patterns preventing load folding by Evan Cheng · 19 years ago
  30. 3c280bf Add checks for __OpenBSD__. by Jeff Cohen · 19 years ago
  31. bdd558c Teach the ppc backend to use rol and vsldoi to generate splatted constants. by Chris Lattner · 19 years ago
  32. 966083f add a note by Chris Lattner · 19 years ago
  33. 5001ea1 FP SETOLT, SETOLE, SETUGE, SETUGT conditions were implemented incorrectly by Evan Cheng · 19 years ago
  34. 6876e66 Make some code more general, adding support for constant formation of several by Chris Lattner · 19 years ago
  35. c408382 Learn how to make odd splatted constants in range [17,29]. This implements by Chris Lattner · 19 years ago
  36. 4a998b9 Pull some code out into a helper function. by Chris Lattner · 19 years ago
  37. 5913810 Implement a TODO: for any shuffle that can be viewed as a v4[if]32 shuffle, by Chris Lattner · 19 years ago
  38. cffeb86 Regenerate with adjusted costs by Chris Lattner · 19 years ago
  39. 586d6a8 Regenerate with correct offset by Chris Lattner · 19 years ago
  40. c74e710 Increase the opcodes by one each to disambiguate COPY from VMRGHW. by Chris Lattner · 19 years ago
  41. 6703461 Check in a table, generated by llvm-PerfectShuffle, of optimal shuffles by Chris Lattner · 19 years ago
  42. 06aef15 movduprm, movshduprm bugs by Evan Cheng · 19 years ago
  43. d8e8223 Encoding bugs by Evan Cheng · 19 years ago
  44. 800f12d Can't fold loads into alias vector SSE ops used for scalar operation. The load by Evan Cheng · 19 years ago
  45. f3f69de Implement a TODO: have the legalizer canonicalize a bunch of operations to by Chris Lattner · 19 years ago
  46. 2efce0a Add support for promoting stores from one legal type to another, allowing us by Chris Lattner · 19 years ago
  47. b17f167 Make the BUILD_VECTOR lowering code much more aggressive w.r.t constant vectors. by Chris Lattner · 19 years ago
  48. 7f6cc0c Fix a bug in the 'shuffle(undef,x,mask) -> shuffle(x, undef,mask')' xform by Chris Lattner · 19 years ago
  49. 706126d Canonicalize shuffle(undef,x,mask) -> shuffle(x, undef,mask'). by Chris Lattner · 19 years ago
  50. 730b456 Fix a crash when faced with a shuffle vector that has an undef in its mask. by Chris Lattner · 19 years ago
  51. 6e94af7 Add patterns for matching vnots with bit converted inputs. Most of these will by Chris Lattner · 19 years ago
  52. 1fcee4e Add a new vnot_conv predicate for matching vnot's where the allones vector is by Chris Lattner · 19 years ago
  53. 547a16f Make these predicates return true for bit_convert(buildvector)'s as well as by Chris Lattner · 19 years ago
  54. 60d3fa2 More encoding bugs by Evan Cheng · 19 years ago
  55. 1af1898 pslldrm, psrawrm, etc. encoding bug by Evan Cheng · 19 years ago
  56. 7076e2d hsubp{s|d} encoding bug by Evan Cheng · 19 years ago
  57. 57ebe9f Silly bug by Evan Cheng · 19 years ago
  58. 39fc145 Do not use movs{h|l}dup for a shuffle with a single non-undef node. by Evan Cheng · 19 years ago
  59. efb4735 significant cleanups to code that uses insert/extractelt heavily. This builds by Chris Lattner · 19 years ago
  60. 407428e Added SSE (and other) entries to foldMemoryOperand(). by Evan Cheng · 19 years ago
  61. 9ab1ac5 Some clean up by Evan Cheng · 19 years ago
  62. b097aa9 Allow undef in a shuffle mask by Chris Lattner · 19 years ago
  63. f95670f Move these ctors out of line by Chris Lattner · 19 years ago
  64. d953947 Last few SSE3 intrinsics. by Evan Cheng · 19 years ago
  65. de6df88 Teach scalarrepl to promote unions of vectors and floats, producing by Chris Lattner · 19 years ago
  66. f3e1b1d Misc. SSE2 intrinsics: clflush, lfence, mfence by Evan Cheng · 19 years ago
  67. d9245ca We were not adjusting the frame size to ensure proper alignment when alloca / by Evan Cheng · 19 years ago
  68. 4f51d85 New entry by Evan Cheng · 19 years ago
  69. e25fdaf Don't print out the install command for Intrinsics.gen unless VERBOSE mode. by Reid Spencer · 19 years ago
  70. 3824e50 Make this assertion better by Chris Lattner · 19 years ago
  71. 1a635d6 Move the rest of the PPCTargetLowering::LowerOperation cases out into by Chris Lattner · 19 years ago
  72. f1b4708 Pull the VECTOR_SHUFFLE and BUILD_VECTOR lowering code out into separate by Chris Lattner · 19 years ago
  73. 0fa07f9 Implement value #'ing for vector operations, implementing by Chris Lattner · 19 years ago
  74. bb5c43e pcmpeq* and pcmpgt* intrinsics. by Evan Cheng · 19 years ago
  75. 0ac8ea9 psll*, psrl*, and psra* intrinsics. by Evan Cheng · 19 years ago
  76. 7a1006c Remove the .cvsignore file so this directory can be pruned. by Reid Spencer · 19 years ago
  77. 7095ec1 Remove .cvsignore so that this directory can be pruned. by Reid Spencer · 19 years ago
  78. 99c1942 Handle some kernel code that ends in [0 x sbyte]. I think this is safe by Andrew Lenharth · 19 years ago
  79. 60d07ee Expand some code with temporary variables to rid ourselves of the warning by Reid Spencer · 19 years ago
  80. 2b21ac6 Doh. PANDrm, etc. are not commutable. by Evan Cheng · 19 years ago
  81. a39d798 Force non-darwin targets to use a static relo model. This fixes PR734, by Chris Lattner · 19 years ago
  82. ed93790 add a note, move an altivec todo to the altivec list. by Chris Lattner · 19 years ago
  83. 61e99c9 linear -> constant time by Andrew Lenharth · 19 years ago
  84. 3758552 Add the README files to the distribution. by Reid Spencer · 19 years ago
  85. 0058694 psad, pmax, pmin intrinsics. by Evan Cheng · 19 years ago
  86. 2f40b1b Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc. by Evan Cheng · 19 years ago
  87. f998984 X86 SSE2 supports v8i16 multiplication by Evan Cheng · 19 years ago
  88. fc7c17a Update by Evan Cheng · 19 years ago
  89. 49ac1bf padds{b|w}, paddus{b|w}, psubs{b|w}, psubus{b|w} intrinsics. by Evan Cheng · 19 years ago
  90. a50a086 Naming inconsistency. by Evan Cheng · 19 years ago
  91. d2a6d54 SSE / SSE2 conversion intrinsics. by Evan Cheng · 19 years ago
  92. 2c3ae37 All "integer" logical ops (pand, por, pxor) are now promoted to v2i64. by Evan Cheng · 19 years ago
  93. cc98761 Promote vector AND, OR, and XOR by Evan Cheng · 19 years ago
  94. 536c006 Make sure CVS versions of yacc and lex files get distributed. by Reid Spencer · 19 years ago
  95. ad20726 Get rid of a signed/unsigned compare warning. by Reid Spencer · 19 years ago
  96. ac225ca Add a new way to match vector constants, which make it easier to bang bits of by Chris Lattner · 19 years ago
  97. 9fb9213 Turn casts into getelementptr's when possible. This enables SROA to be more by Chris Lattner · 19 years ago
  98. 403d43a Don't emit useless warning messages. by Reid Spencer · 19 years ago
  99. e87192a Rename get_VSPLI_elt -> get_VSPLTI_elt by Chris Lattner · 19 years ago
  100. 91b740d Promote v4i32, v8i16, v16i8 load to v2i64 load. by Evan Cheng · 19 years ago