1. 5595cff SLUB: dynamic per-cache MIN_PARTIAL by Pekka Enberg · 16 years ago
2. 51cc506 SL*B: drop kmem cache argument from constructor by Alexey Dobriyan · 16 years ago
3. cde5353 Christoph has moved by Christoph Lameter · 16 years ago
4. 41d54d3 slub: Do not use 192 byte sized cache if minimum alignment is 128 byte by Christoph Lameter · 16 years ago
5. 65c3376 slub: Fallback to minimal order during slab page allocation by Christoph Lameter · 17 years ago
6. 205ab99 slub: Update statistics handling for variable order slabs by Christoph Lameter · 17 years ago
7. 834f3d1 slub: Add kmem_cache_order_objects struct by Christoph Lameter · 17 years ago
8. 0f389ec slub: No need for per node slab counters if !SLUB_DEBUG by Christoph Lameter · 17 years ago
9. 6446faa slub: Fix up comments by Christoph Lameter · 17 years ago
10. 331dc55 slub: Support 4k kmallocs again to compensate for page allocator slowness by Christoph Lameter · 17 years ago
11. b7a49f0 slub: Determine gfpflags once and not every time a slab is allocated by Christoph Lameter · 17 years ago
12. eada35e slub: kmalloc page allocator pass-through cleanup by Pekka Enberg · 17 years ago
13. 8ff12cf SLUB: Support for performance statistics by Christoph Lameter · 17 years ago
14. da89b79 Explain kmem_cache_cpu fields by Christoph Lameter · 17 years ago
15. 9824601 SLUB: rename defrag to remote_node_defrag_ratio by Christoph Lameter · 17 years ago
16. 158a962 Unify /proc/slabinfo configuration by Linus Torvalds · 17 years ago
17. 57ed3ed slub: provide /proc/slabinfo by Pekka J Enberg · 17 years ago
18. 4ba9b9d Slab API: remove useless ctor parameter and reorder parameters by Christoph Lameter · 17 years ago
19. 42a9fdb SLUB: Optimize cacheline use for zeroing by Christoph Lameter · 17 years ago
20. 4c93c355 SLUB: Place kmem_cache_cpu structures in a NUMA aware way by Christoph Lameter · 17 years ago
21. b3fba8d SLUB: Move page->offset to kmem_cache_cpu->offset by Christoph Lameter · 17 years ago
22. dfb4f09 SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab by Christoph Lameter · 17 years ago
23. aadb4bc SLUB: direct pass through of page size or higher kmalloc requests by Christoph Lameter · 17 years ago
24. aa137f9 SLUB: Force inlining for functions in slub_def.h by Christoph Lameter · 17 years ago
25. d046943 fix gfp_t annotations for slub by Al Viro · 17 years ago
26. 81cda66 Slab allocators: Cleanup zeroing allocations by Christoph Lameter · 17 years ago
27. 0c71001 SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG by Christoph Lameter · 17 years ago
28. 6cb8f91 Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics by Christoph Lameter · 17 years ago
29. 6193a2f slob: initial NUMA support by Paul Mundt · 17 years ago
30. 4b356be SLUB: minimum alignment fixes by Christoph Lameter · 17 years ago
31. 272c1d2 SLUB: return ZERO_SIZE_PTR for kmalloc(0) by Christoph Lameter · 17 years ago
32. 0aa817f Slab allocators: define common size limitations by Christoph Lameter · 17 years ago
33. ade3aff slub: fix handling of oversized slabs by Andrew Morton · 17 years ago
34. c59def9 Slab allocators: Drop support for destructors by Christoph Lameter · 17 years ago
35. 1abd727 SLUB: It is legit to allocate a slab of the maximum permitted size by Christoph Lameter · 17 years ago
36. cfbf07f SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit by Christoph Lameter · 17 years ago
37. 643b113 slub: enable tracking of full slabs by Christoph Lameter · 18 years ago
38. 614410d SLUB: allocate smallest object size if the user asks for 0 bytes by Christoph Lameter · 18 years ago
39. 81819f0 SLUB core by Christoph Lameter · 18 years ago