  1. bdb2192 slub: Fix use-after-preempt of per-CPU data structure by Dmitry Adamushko · 16 years ago
  2. cde5353 Christoph has moved by Christoph Lameter · 16 years ago
  3. 41d54d3 slub: Do not use 192 byte sized cache if minimum alignment is 128 byte by Christoph Lameter · 16 years ago
  4. 15c8b6c on_each_cpu(): kill unused 'retry' parameter by Jens Axboe · 17 years ago
  5. 7699441 slub: ksize() abuse checks by Pekka Enberg · 16 years ago
  6. 4ea33e2 slub: fix atomic usage in any_slab_objects() by Benjamin Herrenschmidt · 17 years ago
  7. f6acb63 slub: #ifdef simplification by Christoph Lameter · 17 years ago
  8. 0121c619 slub: Whitespace cleanup and use of strict_strtoul by Christoph Lameter · 17 years ago
  9. f8bd225 remove div_long_long_rem by Roman Zippel · 17 years ago
  10. 3ac7fe5 infrastructure to debug (dynamic) objects by Thomas Gleixner · 17 years ago
  11. 0c40ba4 ipc: define the slab_memory_callback priority as a constant by Nadia Derbey · 17 years ago
  12. e97e386 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6 by Linus Torvalds · 17 years ago
  13. 1b27d05 mm: move cache_line_size() to <linux/cache.h> by Pekka Enberg · 17 years ago
  14. dd1a239 mm: have zonelist contains structs with both a zone pointer and zone_idx by Mel Gorman · 17 years ago
  15. 54a6eb5 mm: use two zonelist that are filtered by GFP mask by Mel Gorman · 17 years ago
  16. 0e88460 mm: introduce node_zonelist() for accessing the zonelist for a GFP mask by Mel Gorman · 17 years ago
  17. c124f5b slub: pack objects denser by Christoph Lameter · 17 years ago
  18. 9b2cd50 slub: Calculate min_objects based on number of processors. by Christoph Lameter · 17 years ago
  19. 114e9e8 slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS by Christoph Lameter · 17 years ago
  20. 31d33ba slub: Simplify any_slab_object checks by Christoph Lameter · 17 years ago
  21. 06b285d slub: Make the order configurable for each slab cache by Christoph Lameter · 17 years ago
  22. 319d1e2 slub: Drop fallback to page allocator method by Christoph Lameter · 17 years ago
  23. 65c3376 slub: Fallback to minimal order during slab page allocation by Christoph Lameter · 17 years ago
  24. 205ab99 slub: Update statistics handling for variable order slabs by Christoph Lameter · 17 years ago
  25. 834f3d1 slub: Add kmem_cache_order_objects struct by Christoph Lameter · 17 years ago
  26. 224a88b slub: for_each_object must be passed the number of objects in a slab by Christoph Lameter · 17 years ago
  27. 39b2646 slub: Store max number of objects in the page struct. by Christoph Lameter · 17 years ago
  28. 33b12c3 slub: Dump list of objects not freed on kmem_cache_close() by Christoph Lameter · 17 years ago
  29. 599870b slub: free_list() cleanup by Christoph Lameter · 17 years ago
  30. d629d81 slub: improve kmem_cache_destroy() error message by Pekka Enberg · 17 years ago
  31. 3dc5063 slab_err: Pass parameters correctly to slab_bug by Christoph Lameter · 17 years ago
  32. 0f389ec slub: No need for per node slab counters if !SLUB_DEBUG by Christoph Lameter · 17 years ago
  33. 49bd522 slub: Move map/flag clearing to __free_slab by Christoph Lameter · 17 years ago
  34. 50ef37b slub: Fixes to per cpu stat output in sysfs by Christoph Lameter · 17 years ago
  35. 5b06c853 slub: Deal with config variable dependencies by Christoph Lameter · 17 years ago
  36. 4097d60 slub: Reduce #ifdef ZONE_DMA by moving kmalloc_caches_dma near dma logic by Christoph Lameter · 17 years ago
  37. 62f7553 slub: Initialize per-cpu stats by Pekka Enberg · 17 years ago
  38. 00460dd Fix undefined count_partial if !CONFIG_SLABINFO by Christoph Lameter · 17 years ago
  39. e72e9c2 Revert "SLUB: remove useless masking of GFP_ZERO" by Linus Torvalds · 17 years ago
  40. 53625b4 count_partial() is not used if !SLUB_DEBUG and !CONFIG_SLABINFO by Christoph Lameter · 17 years ago
  41. caeab08 slub page alloc fallback: Enable interrupts for GFP_WAIT. by Christoph Lameter · 17 years ago
  42. b621038 slub: Do not cross cacheline boundaries for very small objects by Nick Piggin · 17 years ago
  43. b773ad7 slub statistics: Fix check for DEACTIVATE_REMOTE_FREES by Christoph Lameter · 17 years ago
  44. 62e5c4b slub: fix possible NULL pointer dereference by Cyrill Gorcunov · 17 years ago
  45. f619cfe slub: Add kmalloc_large_node() to support kmalloc_node fallback by Christoph Lameter · 17 years ago
  46. 7693143 slub: look up object from the freelist once by Pekka J Enberg · 17 years ago
  47. 6446faa slub: Fix up comments by Christoph Lameter · 17 years ago
  48. d8b42bf slub: Rearrange #ifdef CONFIG_SLUB_DEBUG in calculate_sizes() by Christoph Lameter · 17 years ago
  49. ae20bfd slub: Remove BUG_ON() from ksize and omit checks for !SLUB_DEBUG by Christoph Lameter · 17 years ago
  50. 27d9e4e slub: Use the objsize from the kmem_cache_cpu structure by Christoph Lameter · 17 years ago
  51. d692ef6 slub: Remove useless checks in alloc_debug_processing by Christoph Lameter · 17 years ago
  52. e153362 slub: Remove objsize check in kmem_cache_flags() by Christoph Lameter · 17 years ago
  53. d9acf4b slub: rename slab_objects to show_slab_objects by Christoph Lameter · 17 years ago
  54. a973e9d Revert "unique end pointer" patch by Christoph Lameter · 17 years ago
  55. 00e962c Revert "SLUB: Alternate fast paths using cmpxchg_local" by Linus Torvalds · 17 years ago
  56. 331dc55 slub: Support 4k kmallocs again to compensate for page allocator slowness by Christoph Lameter · 17 years ago
  57. 71c7a06 slub: Fallback to kmalloc_large for failing higher order allocs by Christoph Lameter · 17 years ago
  58. b7a49f0 slub: Determine gfpflags once and not every time a slab is allocated by Christoph Lameter · 17 years ago
  59. dada123 make slub.c:slab_address() static by Adrian Bunk · 17 years ago
  60. eada35e slub: kmalloc page allocator pass-through cleanup by Pekka Enberg · 17 years ago
  61. 3adbefe SLUB: fix checkpatch warnings by Ingo Molnar · 17 years ago
  62. a76d354 Use non atomic unlock by Nick Piggin · 17 years ago
  63. 8ff12cf SLUB: Support for performance statistics by Christoph Lameter · 17 years ago
  64. 1f84260 SLUB: Alternate fast paths using cmpxchg_local by Christoph Lameter · 17 years ago
  65. 683d0ba SLUB: Use unique end pointer for each slab page. by Christoph Lameter · 17 years ago
  66. 5bb983b SLUB: Deal with annoying gcc warning on kfree() by Christoph Lameter · 17 years ago
  67. ba84c73 SLUB: Do not upset lockdep by root · 17 years ago
  68. 0642878 SLUB: Fix coding style violations by Pekka Enberg · 17 years ago
  69. 7c2e132 Add parameter to add_partial to avoid having two functions by Christoph Lameter · 17 years ago
  70. 9824601 SLUB: rename defrag to remote_node_defrag_ratio by Christoph Lameter · 17 years ago
  71. f61396a Move count_partial before kmem_cache_shrink by Christoph Lameter · 17 years ago
  72. 151c602 SLUB: Fix sysfs refcounting by Christoph Lameter · 17 years ago
  73. e374d48 slub: fix shadowed variable sparse warnings by Harvey Harrison · 17 years ago
  74. 1eada11 Kobject: convert mm/slub.c to use kobject_init/add_ng() by Greg Kroah-Hartman · 17 years ago
  75. 0ff21e4 kobject: convert kernel_kset to be a kobject by Greg Kroah-Hartman · 17 years ago
  76. 081248d kset: move /sys/slab to /sys/kernel/slab by Greg Kroah-Hartman · 17 years ago
  77. 27c3a31 kset: convert slub to use kset_create by Greg Kroah-Hartman · 17 years ago
  78. 3514fac kobject: remove struct kobj_type from struct kset by Greg Kroah-Hartman · 17 years ago
  79. 158a962 Unify /proc/slabinfo configuration by Linus Torvalds · 17 years ago
  80. 57ed3ed slub: provide /proc/slabinfo by Pekka J Enberg · 17 years ago
  81. 76be895 SLUB: Improve hackbench speed by Christoph Lameter · 17 years ago
  82. 3811dbf SLUB: remove useless masking of GFP_ZERO by Christoph Lameter · 17 years ago
  83. 7fd2725 Avoid double memclear() in SLOB/SLUB by Linus Torvalds · 17 years ago
  84. 294a80a SLUB's ksize() fails for size > 2048 by Vegard Nossum · 17 years ago
  85. efe4418 SLUB: killed the unused "end" variable by Denis Cheng · 17 years ago
  86. 05aa345 SLUB: Fix memory leak by not reusing cpu_slab by Christoph Lameter · 17 years ago
  87. 27bb628 missing atomic_read_long() in slub.c by Al Viro · 17 years ago
  88. b9049e2 memory hotplug: make kmem_cache_node for SLUB on memory online avoid panic by Yasunori Goto · 17 years ago
  89. 4ba9b9d Slab API: remove useless ctor parameter and reorder parameters by Christoph Lameter · 17 years ago
  90. b811c20 SLUB: simplify IRQ off handling by Christoph Lameter · 17 years ago
  91. ea3061d slub: list_locations() can use GFP_TEMPORARY by Andrew Morton · 17 years ago
  92. 42a9fdb SLUB: Optimize cacheline use for zeroing by Christoph Lameter · 17 years ago
  93. 4c93c355 SLUB: Place kmem_cache_cpu structures in a NUMA aware way by Christoph Lameter · 17 years ago
  94. ee3c72a SLUB: Avoid touching page struct when freeing to per cpu slab by Christoph Lameter · 17 years ago
  95. b3fba8d SLUB: Move page->offset to kmem_cache_cpu->offset by Christoph Lameter · 17 years ago
  96. 8e65d24 SLUB: Do not use page->mapping by Christoph Lameter · 17 years ago
  97. dfb4f09 SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab by Christoph Lameter · 17 years ago
  98. e12ba74 Group short-lived and reclaimable kernel allocations by Mel Gorman · 17 years ago
  99. 6cb0622 Categorize GFP flags by Christoph Lameter · 17 years ago
  100. f64dc58 Memoryless nodes: SLUB support by Christoph Lameter · 17 years ago