author | Christoph Lameter <cl@linux-foundation.org> | 2009-12-18 16:26:20 -0600
---|---|---
committer | Pekka Enberg <penberg@cs.helsinki.fi> | 2009-12-20 09:29:18 +0200
commit | 9dfc6e68bfe6ee452efb1a4e9ca26a9007f2b864 | (patch)
tree | 40e54f2819e176ceb95b8899265bd48751965c27 | /include
parent | 55639353a0035052d9ea6cfe4dde0ac7fcbb2c9f | (diff)
SLUB: Use this_cpu operations in slub
Using per cpu allocations removes the need for the per cpu arrays in the
kmem_cache struct. These arrays could get quite big if we have to support
systems with thousands of cpus. The use of this_cpu_xx operations results in:
1. The size of kmem_cache for SMP configurations shrinks since we will only
need 1 pointer instead of NR_CPUS of them. The same pointer can be used by all
processors, which reduces the cache footprint of the allocator.
2. We can dynamically size kmem_cache according to the actual nodes in the
system, meaning less memory overhead for configurations that may potentially
support up to 1k NUMA nodes / 4k cpus.
3. We can remove the fiddling with allocating and releasing of
kmem_cache_cpu structures when bringing up and shutting down cpus. The cpu
alloc logic will do it all for us. This removes some portions of the cpu
hotplug functionality.
4. Fastpath performance increases since per cpu pointer lookups and
address calculations are avoided; a sketch of the resulting access pattern
follows below.
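The diff further down collapses the per cpu array into a single pointer that is handed to the percpu allocator. Below is a minimal sketch of the pattern the changelog describes, using the kernel percpu API (alloc_percpu(), this_cpu_ptr(), free_percpu()); the helper names and bodies are illustrative only and are not the actual mm/slub.c hunks, which fall outside this include/-limited diffstat.

```c
/*
 * Illustrative sketch only -- not the real mm/slub.c changes.  It shows how
 * one dynamically allocated percpu area replaces the NR_CPUS pointer array
 * and how the fastpath reaches the current cpu's data.
 */
#include <linux/percpu.h>
#include <linux/slab.h>		/* pulls in the struct kmem_cache definition */

static int alloc_kmem_cache_cpus_sketch(struct kmem_cache *s)
{
	/* One percpu allocation instead of NR_CPUS separately allocated structs. */
	s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
	return s->cpu_slab ? 0 : -ENOMEM;
}

static struct kmem_cache_cpu *current_cpu_slab_sketch(struct kmem_cache *s)
{
	/*
	 * this_cpu_ptr() resolves the offset for the executing cpu, so no
	 * smp_processor_id() array index or explicit address calculation
	 * is needed on the fastpath.
	 */
	return this_cpu_ptr(s->cpu_slab);
}

static void free_kmem_cache_cpus_sketch(struct kmem_cache *s)
{
	/* No per cpu hotplug teardown; one call releases every instance. */
	free_percpu(s->cpu_slab);
}
```

The same code works on UP builds because the percpu allocator degenerates to a single instance there, which is why the #ifdef CONFIG_SMP/#else split in the struct can go away.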
V7-V8
- Convert missed get_cpu_slab() under CONFIG_SLUB_STATS
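The V7-V8 note refers to a statistics helper that still went through get_cpu_slab(). The before/after sketch below is hedged: it assumes the CONFIG_SLUB_STATS stat[] counters in struct kmem_cache_cpu, and the exact names and signatures in mm/slub.c may differ.

```c
/*
 * Illustrative only: bumping a CONFIG_SLUB_STATS counter before and after
 * the conversion to this_cpu operations.  Names and signatures are a sketch.
 */
#ifdef CONFIG_SLUB_STATS
/* Before: fetch this cpu's kmem_cache_cpu explicitly, then increment. */
static inline void stat_before(struct kmem_cache *s, enum stat_item si)
{
	get_cpu_slab(s, raw_smp_processor_id())->stat[si]++;
}

/* After: a single this_cpu operation does the per cpu addressing. */
static inline void stat_after(struct kmem_cache *s, enum stat_item si)
{
	__this_cpu_inc(s->cpu_slab->stat[si]);
}
#endif
```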
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/slub_def.h | 6
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 1e14beb..17ebe0f 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -69,6 +69,7 @@ struct kmem_cache_order_objects {
  * Slab cache management.
  */
 struct kmem_cache {
+	struct kmem_cache_cpu *cpu_slab;
 	/* Used for retriving partial slabs etc */
 	unsigned long flags;
 	int size;		/* The size of an object including meta data */
@@ -104,11 +105,6 @@ struct kmem_cache {
 	int remote_node_defrag_ratio;
 	struct kmem_cache_node *node[MAX_NUMNODES];
 #endif
-#ifdef CONFIG_SMP
-	struct kmem_cache_cpu *cpu_slab[NR_CPUS];
-#else
-	struct kmem_cache_cpu cpu_slab;
-#endif
 };
 
 /*