Log of include/linux/slub_def.h in kernel/msm-4.9 at 9757d55652f98836b9a4cac307a01f8b0232dbd9 (gerrit-public.fairphone.software)
5595cff  SLUB: dynamic per-cache MIN_PARTIAL (Pekka Enberg, 16 years ago)
51cc506  SL*B: drop kmem cache argument from constructor (Alexey Dobriyan, 16 years ago)
cde5353  Christoph has moved (Christoph Lameter, 16 years ago)
41d54d3  slub: Do not use 192 byte sized cache if minimum alignment is 128 byte (Christoph Lameter, 16 years ago)
65c3376  slub: Fallback to minimal order during slab page allocation (Christoph Lameter, 17 years ago)
205ab99  slub: Update statistics handling for variable order slabs (Christoph Lameter, 17 years ago)
834f3d1  slub: Add kmem_cache_order_objects struct (Christoph Lameter, 17 years ago)
0f389ec  slub: No need for per node slab counters if !SLUB_DEBUG (Christoph Lameter, 17 years ago)
6446faa  slub: Fix up comments (Christoph Lameter, 17 years ago)
331dc55  slub: Support 4k kmallocs again to compensate for page allocator slowness (Christoph Lameter, 17 years ago)
b7a49f0  slub: Determine gfpflags once and not every time a slab is allocated (Christoph Lameter, 17 years ago)
eada35e  slub: kmalloc page allocator pass-through cleanup (Pekka Enberg, 17 years ago)
8ff12cf  SLUB: Support for performance statistics (Christoph Lameter, 17 years ago)
da89b79  Explain kmem_cache_cpu fields (Christoph Lameter, 17 years ago)
9824601  SLUB: rename defrag to remote_node_defrag_ratio (Christoph Lameter, 17 years ago)
158a962  Unify /proc/slabinfo configuration (Linus Torvalds, 17 years ago)
57ed3ed  slub: provide /proc/slabinfo (Pekka J Enberg, 17 years ago)
4ba9b9d  Slab API: remove useless ctor parameter and reorder parameters (Christoph Lameter, 17 years ago)
42a9fdb  SLUB: Optimize cacheline use for zeroing (Christoph Lameter, 17 years ago)
4c93c355 SLUB: Place kmem_cache_cpu structures in a NUMA aware way (Christoph Lameter, 17 years ago)
b3fba8d  SLUB: Move page->offset to kmem_cache_cpu->offset (Christoph Lameter, 17 years ago)
dfb4f09  SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab (Christoph Lameter, 17 years ago)
aadb4bc  SLUB: direct pass through of page size or higher kmalloc requests (Christoph Lameter, 17 years ago)
aa137f9  SLUB: Force inlining for functions in slub_def.h (Christoph Lameter, 17 years ago)
d046943  fix gfp_t annotations for slub (Al Viro, 17 years ago)
81cda66  Slab allocators: Cleanup zeroing allocations (Christoph Lameter, 17 years ago)
0c71001  SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG (Christoph Lameter, 17 years ago)
6cb8f91  Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics (Christoph Lameter, 17 years ago)
6193a2f  slob: initial NUMA support (Paul Mundt, 17 years ago)
4b356be  SLUB: minimum alignment fixes (Christoph Lameter, 17 years ago)
272c1d2  SLUB: return ZERO_SIZE_PTR for kmalloc(0) (Christoph Lameter, 17 years ago)
0aa817f  Slab allocators: define common size limitations (Christoph Lameter, 18 years ago)
ade3aff  slub: fix handling of oversized slabs (Andrew Morton, 18 years ago)
c59def9  Slab allocators: Drop support for destructors (Christoph Lameter, 18 years ago)
1abd727  SLUB: It is legit to allocate a slab of the maximum permitted size (Christoph Lameter, 18 years ago)
cfbf07f  SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit (Christoph Lameter, 18 years ago)
643b113  slub: enable tracking of full slabs (Christoph Lameter, 18 years ago)
614410d  SLUB: allocate smallest object size if the user asks for 0 bytes (Christoph Lameter, 18 years ago)
81819f0  SLUB core (Christoph Lameter, 18 years ago)