  1. 51cc506 SL*B: drop kmem cache argument from constructor by Alexey Dobriyan · 16 years ago
  2. cde5353 Christoph has moved by Christoph Lameter · 16 years ago
  3. 41d54d3 slub: Do not use 192 byte sized cache if minimum alignment is 128 byte by Christoph Lameter · 16 years ago
  4. 65c3376 slub: Fallback to minimal order during slab page allocation by Christoph Lameter · 16 years ago
  5. 205ab99 slub: Update statistics handling for variable order slabs by Christoph Lameter · 16 years ago
  6. 834f3d1 slub: Add kmem_cache_order_objects struct by Christoph Lameter · 16 years ago
  7. 0f389ec slub: No need for per node slab counters if !SLUB_DEBUG by Christoph Lameter · 16 years ago
  8. 6446faa slub: Fix up comments by Christoph Lameter · 16 years ago
  9. 331dc55 slub: Support 4k kmallocs again to compensate for page allocator slowness by Christoph Lameter · 16 years ago
  10. b7a49f0 slub: Determine gfpflags once and not every time a slab is allocated by Christoph Lameter · 16 years ago
  11. eada35e slub: kmalloc page allocator pass-through cleanup by Pekka Enberg · 16 years ago
  12. 8ff12cf SLUB: Support for performance statistics by Christoph Lameter · 16 years ago
  13. da89b79 Explain kmem_cache_cpu fields by Christoph Lameter · 17 years ago
  14. 9824601 SLUB: rename defrag to remote_node_defrag_ratio by Christoph Lameter · 17 years ago
  15. 158a962 Unify /proc/slabinfo configuration by Linus Torvalds · 17 years ago
  16. 57ed3ed slub: provide /proc/slabinfo by Pekka J Enberg · 17 years ago
  17. 4ba9b9d Slab API: remove useless ctor parameter and reorder parameters by Christoph Lameter · 17 years ago
  18. 42a9fdb SLUB: Optimize cacheline use for zeroing by Christoph Lameter · 17 years ago
  19. 4c93c355 SLUB: Place kmem_cache_cpu structures in a NUMA aware way by Christoph Lameter · 17 years ago
  20. b3fba8d SLUB: Move page->offset to kmem_cache_cpu->offset by Christoph Lameter · 17 years ago
  21. dfb4f09 SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab by Christoph Lameter · 17 years ago
  22. aadb4bc SLUB: direct pass through of page size or higher kmalloc requests by Christoph Lameter · 17 years ago
  23. aa137f9 SLUB: Force inlining for functions in slub_def.h by Christoph Lameter · 17 years ago
  24. d046943 fix gfp_t annotations for slub by Al Viro · 17 years ago
  25. 81cda66 Slab allocators: Cleanup zeroing allocations by Christoph Lameter · 17 years ago
  26. 0c71001 SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG by Christoph Lameter · 17 years ago
  27. 6cb8f91 Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics by Christoph Lameter · 17 years ago
  28. 6193a2f slob: initial NUMA support by Paul Mundt · 17 years ago
  29. 4b356be SLUB: minimum alignment fixes by Christoph Lameter · 17 years ago
  30. 272c1d2 SLUB: return ZERO_SIZE_PTR for kmalloc(0) by Christoph Lameter · 17 years ago
  31. 0aa817f Slab allocators: define common size limitations by Christoph Lameter · 17 years ago
  32. ade3aff slub: fix handling of oversized slabs by Andrew Morton · 17 years ago
  33. c59def9 Slab allocators: Drop support for destructors by Christoph Lameter · 17 years ago
  34. 1abd727 SLUB: It is legit to allocate a slab of the maximum permitted size by Christoph Lameter · 17 years ago
  35. cfbf07f SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit by Christoph Lameter · 17 years ago
  36. 643b113 slub: enable tracking of full slabs by Christoph Lameter · 17 years ago
  37. 614410d SLUB: allocate smallest object size if the user asks for 0 bytes by Christoph Lameter · 17 years ago
  38. 81819f0 SLUB core by Christoph Lameter · 17 years ago
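
Several entries above (272c1d2 and 6cb8f91) concern the ZERO_SIZE_PTR convention: kmalloc(0) returns a distinct non-NULL sentinel rather than NULL, so callers can still tell a successful zero-byte request apart from an allocation failure, while any accidental dereference of the sentinel faults. Below is a minimal userspace sketch of that convention. ZERO_SIZE_PTR and ZERO_OR_NULL_PTR match the kernel's definitions in include/linux/slab.h; demo_kmalloc() and demo_kfree() are hypothetical stand-ins for the real allocator, used only to illustrate the semantics.

```c
#include <stdio.h>
#include <stdlib.h>

/* kmalloc(0) returns this distinct non-NULL sentinel. It points into the
 * unmapped first page, so dereferencing it faults, but it compares unequal
 * to NULL, so callers that only check for allocation failure keep working. */
#define ZERO_SIZE_PTR ((void *)16)

/* True for both NULL and ZERO_SIZE_PTR, i.e. "no usable memory here". */
#define ZERO_OR_NULL_PTR(x) \
        ((unsigned long)(x) <= (unsigned long)ZERO_SIZE_PTR)

/* Hypothetical stand-in for kmalloc(): zero-sized requests allocate nothing. */
static void *demo_kmalloc(size_t size)
{
        if (size == 0)
                return ZERO_SIZE_PTR;
        return malloc(size);
}

/* Hypothetical stand-in for kfree(): the sentinel is safe to free. */
static void demo_kfree(void *ptr)
{
        if (ZERO_OR_NULL_PTR(ptr))
                return;         /* nothing was actually allocated */
        free(ptr);
}

int main(void)
{
        void *p = demo_kmalloc(0);

        printf("kmalloc(0) -> %p, NULL? %d, zero-or-null? %d\n",
               p, p == NULL, ZERO_OR_NULL_PTR(p));
        demo_kfree(p);          /* safe: recognized as the sentinel */
        return 0;
}
```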