Commit cad9cd51 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] slab: cleanups and speedups

- enable the cpu array for all caches

- remove the optimized implementations for quick list access - with
  cpu arrays in all caches, the list access is now rare.

- make the cpu arrays mandatory; this removes 50% of the conditional
  branches from the hot path of kmem_cache_alloc [1]

- poisoning for objects with constructors

Patch got a bit longer...

I forgot to mention this: head arrays mean that some pages can be
blocked due to objects in the head arrays, and not returned to
page_alloc.c.  The current kernel never flushes the head arrays, which
might worsen the behaviour of low-memory systems.  The hunk that
flushes the arrays regularly comes next.

Detailed changelog: [to be read side by side with the patch]

* docu update

* "growing" is not really needed: races between grow and shrink are
  handled by retrying.  [additionally, the current kernel never
  shrinks]

* move the batchcount into the cpu array:
	the old code contained a race during cpu cache tuning:
		update batchcount [in cachep] before or after the IPI?
	And NUMA will need it anyway.
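
  For reference, this is the per-cpu structure as it appears further
  down in the patch (the comments are added here for illustration):

	typedef struct cpucache_s {
		unsigned int avail;		/* objects currently in the array */
		unsigned int limit;		/* flush to the lists above this */
		unsigned int batchcount;	/* objects moved per refill/flush */
	} cpucache_t;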

* bootstrap support: the cpu arrays are really mandatory; nothing
  works without them.  Thus a statically allocated cpu array is needed
  for starting the allocators.
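
  The static bootstrap objects, taken from the hunks below (one for
  cache_cache itself, one for the first generic kmalloc cache):

	#define BOOT_CPUCACHE_ENTRIES 1
	struct cpucache_int {
		cpucache_t cache;
		void *entries[BOOT_CPUCACHE_ENTRIES];
	};

	/* { avail = 0, limit = BOOT_CPUCACHE_ENTRIES, batchcount = 1 } */
	struct cpucache_int cpuarray_cache   __initdata = { { 0, BOOT_CPUCACHE_ENTRIES, 1 } };
	struct cpucache_int cpuarray_generic __initdata = { { 0, BOOT_CPUCACHE_ENTRIES, 1 } };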

* move the full, partial & free lists into a separate structure, as a
  preparation for NUMA
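
  The new structure, as defined in the patch; a NUMA kernel would
  eventually allocate one of these per node:

	struct kmem_list3 {
		struct list_head slabs_partial;	/* partial list first, better asm code */
		struct list_head slabs_full;
		struct list_head slabs_free;
	};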

* structure reorganization: now the cpu arrays are the most important
  part, not the lists.

* dead code elimination: remove "failures", nowhere read.

* dead code elimination: remove "OPTIMIZE": not implemented.  The
  idea is to skip the virt_to_page lookup for caches with on-slab slab
  structures, and use (ptr&PAGE_MASK) instead.  The details are in
  Bonwicks paper.  Not fully implemented.

* remove GROWN: the kernel never shrinks a cache, thus GROWN is
  meaningless.

* bootstrap: starting the slab allocator is now a 3 stage process:
	- nothing works, use the statically allocated cpu arrays.
	- the smallest kmalloc allocator works, use it to allocate
		cpu arrays.
	- all kmalloc allocators work, use the default cpu array size
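
  The stage is tracked with a small enum (the comments map the values
  to the stages above):

	enum {
		NONE,		/* only the static bootstrap cpu arrays work */
		PARTIAL,	/* the smallest kmalloc cache is usable */
		FULL		/* all generic caches are up, default sizes */
	} g_cpucache_up;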

* register a cpu notifier callback, and allocate the needed head
  arrays if a new cpu arrives
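
  The hookup itself is a plain notifier block, registered from
  kmem_cache_init() (taken from the hunks below):

	#ifdef CONFIG_SMP
	static struct notifier_block cpucache_notifier = { &cpuup_callback, NULL, 0 };
	#endif

	/* in kmem_cache_init(): */
	#ifdef CONFIG_SMP
		/* Register a cpu startup notifier callback
		 * that initializes cc_data for all new cpus
		 */
		register_cpu_notifier(&cpucache_notifier);
	#endif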

* always enable head arrays, even for DEBUG builds.  Poisoning and
  red-zoning now happen before an object is added to the arrays.
  Fold enable_all_cpucaches into cpucache_init; there is no need for a
  separate function.

* modifications to the debug checks due to the earlier calls of the
  dtor for caches with poisoning enabled

* poison+ctor is now supported
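
  With poisoning enabled the constructor is no longer run when the
  slab is created (the object gets poisoned instead); it runs when the
  object is handed out, roughly as in cache_alloc_debugcheck_after()
  below:

	if (cachep->ctor && cachep->flags & SLAB_POISON) {
		unsigned long ctor_flags = SLAB_CTOR_CONSTRUCTOR;

		if (!(flags & __GFP_WAIT))
			ctor_flags |= SLAB_CTOR_ATOMIC;
		cachep->ctor(objp, cachep, ctor_flags);
	}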

* squeezing 3 objects into a cacheline is hopeless; the FIXME is not
  solvable and can be removed.

* add additional debug tests: check_irq_off(), check_irq_on(),
  check_spinlock_acquired().
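
  These compile away on non-DEBUG / non-SMP builds; from the patch:

	static inline void check_irq_off(void)
	{
	#if DEBUG
		BUG_ON(!irqs_disabled());
	#endif
	}

	static inline void check_irq_on(void)
	{
	#if DEBUG
		BUG_ON(irqs_disabled());
	#endif
	}

	static inline void check_spinlock_acquired(kmem_cache_t *cachep)
	{
	#ifdef CONFIG_SMP
		check_irq_off();
		BUG_ON(spin_trylock(&cachep->spinlock));
	#endif
	}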

* move do_ccupdate_local nearer to do_tune_cpucache.  Should have
  been part of -04-drain.

* additional object checks.  Red-zoning is tricky: it's implemented
  by increasing the object size by 2*BYTES_PER_WORD.  Thus
  BYTES_PER_WORD must be added to objp before calling the destructor
  or constructor, and before returning the object from alloc.  The
  poison functions add BYTES_PER_WORD internally.
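
  The free path in the patch shows the offsetting (objsize already
  includes the two red-zone words for such caches):

	if (cachep->flags & SLAB_RED_ZONE) {
		objp -= BYTES_PER_WORD;	/* step back to the leading red-zone word */
		if (xchg((unsigned long *)objp, RED_MAGIC1) != RED_MAGIC2)
			BUG();	/* write before the start, or a double free */
		if (xchg((unsigned long *)(objp + cachep->objsize -
				BYTES_PER_WORD), RED_MAGIC1) != RED_MAGIC2)
			BUG();	/* write past the end, or a double free */
	}
	/* dtor/ctor and the pointer handed back by alloc use objp + BYTES_PER_WORD */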

* create a flagcheck function; right now the tests are duplicated in
  cache_grow [always] and alloc_debugcheck_before [DEBUG only]
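
  The shared helper, as added by the patch:

	static void kmem_flagcheck(kmem_cache_t *cachep, int flags)
	{
		if (flags & __GFP_WAIT)
			might_sleep();
		if (flags & SLAB_DMA) {
			if (!(cachep->gfpflags & GFP_DMA))
				BUG();
		} else {
			if (cachep->gfpflags & GFP_DMA)
				BUG();
		}
	}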

* modify slab list updates: all allocs are now bulk allocs that try
  to get multiple objects at once; the list pointers are updated only
  at the end of a bulk alloc, not once per object.
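
  Condensed from cache_alloc_refill()/cache_alloc_listfixup() below:
  objects are pulled from a slab until the slab or the batch is
  exhausted, and the slab changes list once, afterwards:

	while (slabp->inuse < cachep->num && batchcount--)
		cc_entry(cc)[cc->avail++] =
			cache_alloc_one_tail(cachep, slabp);

	/* one list update per slab, not one per object */
	list_del(&slabp->list);
	if (slabp->free == BUFCTL_END)
		list_add(&slabp->list, &l3->slabs_full);
	else
		list_add(&slabp->list, &l3->slabs_partial);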

* might_sleep was moved into kmem_flagcheck.

* major hotpath change:
	- cc always exists, no fallback
	- cache_alloc_refill is called with disabled interrupts,
	  and does everything to recover from an empty cpu array.
	  Far shorter & simpler __cache_alloc [inlined in both
	  kmalloc and kmem_cache_alloc]
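
  The resulting fast path, as it reads after the patch:

	static inline void *__cache_alloc(kmem_cache_t *cachep, int flags)
	{
		unsigned long save_flags;
		void *objp;
		cpucache_t *cc;

		cache_alloc_debugcheck_before(cachep, flags);
		local_irq_save(save_flags);
		cc = cc_data(cachep);
		if (likely(cc->avail)) {
			STATS_INC_ALLOCHIT(cachep);
			objp = cc_entry(cc)[--cc->avail];
		} else {
			STATS_INC_ALLOCMISS(cachep);
			objp = cache_alloc_refill(cachep, flags);
		}
		local_irq_restore(save_flags);
		objp = cache_alloc_debugcheck_after(cachep, flags, objp);
		return objp;
	}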

* __free_block, free_block, cache_flusharray: the main implementation
  of returning objects to the lists.  No big changes; the diff just
  lost track of the code movement.
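
  For reference, the flush side (lightly trimmed from the hunk below):

	static void cache_flusharray(kmem_cache_t *cachep, cpucache_t *cc)
	{
		int batchcount = cc->batchcount;

		check_irq_off();
		/* give the oldest batchcount objects back to the slab lists */
		__free_block(cachep, &cc_entry(cc)[0], batchcount);
		cc->avail -= batchcount;
		/* slide the remaining entries down to the start of the array */
		memmove(&cc_entry(cc)[0], &cc_entry(cc)[batchcount],
				sizeof(void *)*cc->avail);
	}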

* new debug check: too early kmalloc or kmem_cache_alloc

* slightly reduce the sizes of the cpu arrays: keep the total size,
  including the avail, limit and batchcount fields, just below a power
  of 2 for optimal kmalloc memory efficiency.
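
  The defaults picked in enable_cpucache() reflect this; the arithmetic
  in the comments assumes 4-byte pointers and the 12-byte cpucache_t
  header:

	if (cachep->objsize > PAGE_SIZE)
		limit = 8;
	else if (cachep->objsize > 1024)
		limit = 54;	/* 12 + 54*4  = 228  < 256  */
	else if (cachep->objsize > 256)
		limit = 120;	/* 12 + 120*4 = 492  < 512  */
	else
		limit = 248;	/* 12 + 248*4 = 1004 < 1024 */
	err = do_tune_cpucache(cachep, limit, limit/2);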

That's it.  I even found 2 bugs while reading: dtors and ctors for
verify were called with wrong parameters when RED_ZONE was enabled,
and some checks still assumed that POISON and ctor are incompatible.
parent 5bbb9ea6
...@@ -8,6 +8,9 @@ ...@@ -8,6 +8,9 @@
* Major cleanup, different bufctl logic, per-cpu arrays * Major cleanup, different bufctl logic, per-cpu arrays
* (c) 2000 Manfred Spraul * (c) 2000 Manfred Spraul
* *
* Cleanup, make the head arrays unconditional, preparation for NUMA
* (c) 2002 Manfred Spraul
*
* An implementation of the Slab Allocator as described in outline in; * An implementation of the Slab Allocator as described in outline in;
* UNIX Internals: The New Frontiers by Uresh Vahalia * UNIX Internals: The New Frontiers by Uresh Vahalia
* Pub: Prentice Hall ISBN 0-13-101908-2 * Pub: Prentice Hall ISBN 0-13-101908-2
...@@ -16,7 +19,6 @@ ...@@ -16,7 +19,6 @@
* Jeff Bonwick (Sun Microsystems). * Jeff Bonwick (Sun Microsystems).
* Presented at: USENIX Summer 1994 Technical Conference * Presented at: USENIX Summer 1994 Technical Conference
* *
*
* The memory is organized in caches, one cache for each object type. * The memory is organized in caches, one cache for each object type.
* (e.g. inode_cache, dentry_cache, buffer_head, vm_area_struct) * (e.g. inode_cache, dentry_cache, buffer_head, vm_area_struct)
* Each cache consists out of many slabs (they are small (usually one * Each cache consists out of many slabs (they are small (usually one
...@@ -38,12 +40,14 @@ ...@@ -38,12 +40,14 @@
* kmem_cache_destroy() CAN CRASH if you try to allocate from the cache * kmem_cache_destroy() CAN CRASH if you try to allocate from the cache
* during kmem_cache_destroy(). The caller must prevent concurrent allocs. * during kmem_cache_destroy(). The caller must prevent concurrent allocs.
* *
* On SMP systems, each cache has a short per-cpu head array, most allocs * Each cache has a short per-cpu head array, most allocs
* and frees go into that array, and if that array overflows, then 1/2 * and frees go into that array, and if that array overflows, then 1/2
* of the entries in the array are given back into the global cache. * of the entries in the array are given back into the global cache.
* This reduces the number of spinlock operations. * The head array is strictly LIFO and should improve the cache hit rates.
* On SMP, it additionally reduces the spinlock operations.
* *
* The c_cpuarray may not be read with enabled local interrupts. * The c_cpuarray may not be read with enabled local interrupts -
* it's changed with a smp_call_function().
* *
* SMP synchronization: * SMP synchronization:
* constructors and destructors are called without any locking. * constructors and destructors are called without any locking.
...@@ -53,6 +57,10 @@ ...@@ -53,6 +57,10 @@
* and local interrupts are disabled so slab code is preempt-safe. * and local interrupts are disabled so slab code is preempt-safe.
* The non-constant members are protected with a per-cache irq spinlock. * The non-constant members are protected with a per-cache irq spinlock.
* *
* Many thanks to Mark Hemment, who wrote another per-cpu slab patch
* in 2000 - many ideas in the current implementation are derived from
* his patch.
*
* Further notes from the original documentation: * Further notes from the original documentation:
* *
* 11 April '97. Started multi-threading - markhe * 11 April '97. Started multi-threading - markhe
...@@ -61,10 +69,6 @@ ...@@ -61,10 +69,6 @@
* can never happen inside an interrupt (kmem_cache_create(), * can never happen inside an interrupt (kmem_cache_create(),
* kmem_cache_shrink() and kmem_cache_reap()). * kmem_cache_shrink() and kmem_cache_reap()).
* *
* To prevent kmem_cache_shrink() trying to shrink a 'growing' cache (which
* maybe be sleeping and therefore not holding the semaphore/lock), the
* growing field is used. This also prevents reaping from a cache.
*
* At present, each engine can be growing a cache. This should be blocked. * At present, each engine can be growing a cache. This should be blocked.
* *
*/ */
...@@ -77,6 +81,7 @@ ...@@ -77,6 +81,7 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/compiler.h> #include <linux/compiler.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/notifier.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
/* /*
...@@ -170,18 +175,70 @@ typedef struct slab_s { ...@@ -170,18 +175,70 @@ typedef struct slab_s {
* cpucache_t * cpucache_t
* *
* Per cpu structures * Per cpu structures
* Purpose:
* - LIFO ordering, to hand out cache-warm objects from _alloc
* - reduce spinlock operations
*
* The limit is stored in the per-cpu structure to reduce the data cache * The limit is stored in the per-cpu structure to reduce the data cache
* footprint. * footprint.
* On NUMA systems, 2 per-cpu structures exist: one for the current
* node, one for wrong node free calls.
* Memory from the wrong node is never returned by alloc, it's returned
* to the home node as soon as the cpu cache is filled
*
*/ */
typedef struct cpucache_s { typedef struct cpucache_s {
unsigned int avail; unsigned int avail;
unsigned int limit; unsigned int limit;
unsigned int batchcount;
} cpucache_t; } cpucache_t;
/* bootstrap: The caches do not work without cpuarrays anymore,
* but the cpuarrays are allocated from the generic caches...
*/
#define BOOT_CPUCACHE_ENTRIES 1
struct cpucache_int {
cpucache_t cache;
void * entries[BOOT_CPUCACHE_ENTRIES];
};
#define cc_entry(cpucache) \ #define cc_entry(cpucache) \
((void **)(((cpucache_t*)(cpucache))+1)) ((void **)(((cpucache_t*)(cpucache))+1))
#define cc_data(cachep) \ #define cc_data(cachep) \
((cachep)->cpudata[smp_processor_id()]) ((cachep)->cpudata[smp_processor_id()])
/*
* NUMA: check if 'ptr' points into the current node,
* use the alternate cpudata cache if wrong
*/
#define cc_data_ptr(cachep, ptr) \
cc_data(cachep)
/*
* The slab lists of all objects.
* Hopefully reduce the internal fragmentation
* NUMA: The spinlock could be moved from the kmem_cache_t
* into this structure, too. Figure out what causes
* fewer cross-node spinlock operations.
*/
struct kmem_list3 {
struct list_head slabs_partial; /* partial list first, better asm code */
struct list_head slabs_full;
struct list_head slabs_free;
};
#define LIST3_INIT(parent) \
{ \
.slabs_full = LIST_HEAD_INIT(parent.slabs_full), \
.slabs_partial = LIST_HEAD_INIT(parent.slabs_partial), \
.slabs_free = LIST_HEAD_INIT(parent.slabs_free) \
}
#define list3_data(cachep) \
(&(cachep)->lists)
/* NUMA: per-node */
#define list3_data_ptr(cachep, ptr) \
list3_data(cachep)
/* /*
* kmem_cache_t * kmem_cache_t
* *
...@@ -189,18 +246,20 @@ typedef struct cpucache_s { ...@@ -189,18 +246,20 @@ typedef struct cpucache_s {
*/ */
struct kmem_cache_s { struct kmem_cache_s {
/* 1) each alloc & free */ /* 1) per-cpu data, touched during every alloc/free */
/* full, partial first, then free */ cpucache_t *cpudata[NR_CPUS];
struct list_head slabs_full; /* NUMA: cpucache_t *cpudata_othernode[NR_CPUS]; */
struct list_head slabs_partial; unsigned int batchcount;
struct list_head slabs_free; unsigned int limit;
/* 2) touched by every alloc & free from the backend */
struct kmem_list3 lists;
/* NUMA: kmem_3list_t *nodelists[NR_NODES] */
unsigned int objsize; unsigned int objsize;
unsigned int flags; /* constant flags */ unsigned int flags; /* constant flags */
unsigned int num; /* # of objs per slab */ unsigned int num; /* # of objs per slab */
spinlock_t spinlock; spinlock_t spinlock;
unsigned int batchcount;
/* 2) slab additions /removals */ /* 3) cache_grow/shrink */
/* order of pgs per slab (2^n) */ /* order of pgs per slab (2^n) */
unsigned int gfporder; unsigned int gfporder;
...@@ -211,7 +270,6 @@ struct kmem_cache_s { ...@@ -211,7 +270,6 @@ struct kmem_cache_s {
unsigned int colour_off; /* colour offset */ unsigned int colour_off; /* colour offset */
unsigned int colour_next; /* cache colouring */ unsigned int colour_next; /* cache colouring */
kmem_cache_t *slabp_cache; kmem_cache_t *slabp_cache;
unsigned int growing;
unsigned int dflags; /* dynamic flags */ unsigned int dflags; /* dynamic flags */
/* constructor func */ /* constructor func */
...@@ -220,13 +278,11 @@ struct kmem_cache_s { ...@@ -220,13 +278,11 @@ struct kmem_cache_s {
/* de-constructor func */ /* de-constructor func */
void (*dtor)(void *, kmem_cache_t *, unsigned long); void (*dtor)(void *, kmem_cache_t *, unsigned long);
unsigned long failures; /* 4) cache creation/removal */
/* 3) cache creation/removal */
const char *name; const char *name;
struct list_head next; struct list_head next;
/* 4) per-cpu data */
cpucache_t *cpudata[NR_CPUS]; /* 5) statistics */
#if STATS #if STATS
unsigned long num_active; unsigned long num_active;
unsigned long num_allocations; unsigned long num_allocations;
...@@ -243,14 +299,8 @@ struct kmem_cache_s { ...@@ -243,14 +299,8 @@ struct kmem_cache_s {
/* internal c_flags */ /* internal c_flags */
#define CFLGS_OFF_SLAB 0x010000UL /* slab management in own cache */ #define CFLGS_OFF_SLAB 0x010000UL /* slab management in own cache */
#define CFLGS_OPTIMIZE 0x020000UL /* optimized slab lookup */
/* c_dflags (dynamic flags). Need to hold the spinlock to access this member */
#define DFLGS_GROWN 0x000001UL /* don't reap a recently grown */
#define OFF_SLAB(x) ((x)->flags & CFLGS_OFF_SLAB) #define OFF_SLAB(x) ((x)->flags & CFLGS_OFF_SLAB)
#define OPTIMIZE(x) ((x)->flags & CFLGS_OPTIMIZE)
#define GROWN(x) ((x)->dlags & DFLGS_GROWN)
#if STATS #if STATS
#define STATS_INC_ACTIVE(x) ((x)->num_active++) #define STATS_INC_ACTIVE(x) ((x)->num_active++)
...@@ -376,11 +426,15 @@ static struct { ...@@ -376,11 +426,15 @@ static struct {
}; };
#undef CN #undef CN
struct cpucache_int cpuarray_cache __initdata = { { 0, BOOT_CPUCACHE_ENTRIES, 1} };
struct cpucache_int cpuarray_generic __initdata = { { 0, BOOT_CPUCACHE_ENTRIES, 1} };
/* internal cache of cache description objs */ /* internal cache of cache description objs */
static kmem_cache_t cache_cache = { static kmem_cache_t cache_cache = {
.slabs_full = LIST_HEAD_INIT(cache_cache.slabs_full), .lists = LIST3_INIT(cache_cache.lists),
.slabs_partial = LIST_HEAD_INIT(cache_cache.slabs_partial), .cpudata = { [0] = &cpuarray_cache.cache },
.slabs_free = LIST_HEAD_INIT(cache_cache.slabs_free), .batchcount = 1,
.limit = BOOT_CPUCACHE_ENTRIES,
.objsize = sizeof(kmem_cache_t), .objsize = sizeof(kmem_cache_t),
.flags = SLAB_NO_REAP, .flags = SLAB_NO_REAP,
.spinlock = SPIN_LOCK_UNLOCKED, .spinlock = SPIN_LOCK_UNLOCKED,
...@@ -400,10 +454,13 @@ static kmem_cache_t *clock_searchp = &cache_cache; ...@@ -400,10 +454,13 @@ static kmem_cache_t *clock_searchp = &cache_cache;
* chicken and egg problem: delay the per-cpu array allocation * chicken and egg problem: delay the per-cpu array allocation
* until the general caches are up. * until the general caches are up.
*/ */
static int g_cpucache_up; enum {
NONE,
PARTIAL,
FULL
} g_cpucache_up;
static void enable_cpucache (kmem_cache_t *cachep); static void enable_cpucache (kmem_cache_t *cachep);
static void enable_all_cpucaches (void);
/* Cal the num objs, wastage, and bytes left over for a given slab size. */ /* Cal the num objs, wastage, and bytes left over for a given slab size. */
static void cache_estimate (unsigned long gfporder, size_t size, static void cache_estimate (unsigned long gfporder, size_t size,
...@@ -433,6 +490,54 @@ static void cache_estimate (unsigned long gfporder, size_t size, ...@@ -433,6 +490,54 @@ static void cache_estimate (unsigned long gfporder, size_t size,
*left_over = wastage; *left_over = wastage;
} }
#ifdef CONFIG_SMP
/*
* Note: if someone calls kmem_cache_alloc() on the new
* cpu before the cpuup callback had a chance to allocate
* the head arrays, it will oops.
* Is CPU_ONLINE early enough?
*/
static int __devinit cpuup_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
{
int cpu = (int)hcpu;
if (action == CPU_ONLINE) {
struct list_head *p;
cpucache_t *nc;
down(&cache_chain_sem);
p = &cache_cache.next;
do {
int memsize;
kmem_cache_t* cachep = list_entry(p, kmem_cache_t, next);
memsize = sizeof(void*)*cachep->limit+sizeof(cpucache_t);
nc = kmalloc(memsize, GFP_KERNEL);
if (!nc)
goto bad;
nc->avail = 0;
nc->limit = cachep->limit;
nc->batchcount = cachep->batchcount;
cachep->cpudata[cpu] = nc;
p = cachep->next.next;
} while (p != &cache_cache.next);
up(&cache_chain_sem);
}
return NOTIFY_OK;
bad:
up(&cache_chain_sem);
return NOTIFY_BAD;
}
static struct notifier_block cpucache_notifier = { &cpuup_callback, NULL, 0 };
#endif
/* Initialisation - setup the `cache' cache. */ /* Initialisation - setup the `cache' cache. */
void __init kmem_cache_init(void) void __init kmem_cache_init(void)
{ {
...@@ -448,6 +553,13 @@ void __init kmem_cache_init(void) ...@@ -448,6 +553,13 @@ void __init kmem_cache_init(void)
cache_cache.colour = left_over/cache_cache.colour_off; cache_cache.colour = left_over/cache_cache.colour_off;
cache_cache.colour_next = 0; cache_cache.colour_next = 0;
#ifdef CONFIG_SMP
/* Register a cpu startup notifier callback
* that initializes cc_data for all new cpus
*/
register_cpu_notifier(&cpucache_notifier);
#endif
} }
...@@ -489,12 +601,46 @@ void __init kmem_cache_sizes_init(void) ...@@ -489,12 +601,46 @@ void __init kmem_cache_sizes_init(void)
BUG(); BUG();
sizes++; sizes++;
} while (sizes->cs_size); } while (sizes->cs_size);
/*
* The generic caches are running - time to kick out the
* bootstrap cpucaches.
*/
{
void * ptr;
ptr = kmalloc(sizeof(struct cpucache_int), GFP_KERNEL);
local_irq_disable();
BUG_ON(cc_data(&cache_cache) != &cpuarray_cache.cache);
memcpy(ptr, cc_data(&cache_cache), sizeof(struct cpucache_int));
cc_data(&cache_cache) = ptr;
local_irq_enable();
ptr = kmalloc(sizeof(struct cpucache_int), GFP_KERNEL);
local_irq_disable();
BUG_ON(cc_data(cache_sizes[0].cs_cachep) != &cpuarray_generic.cache);
memcpy(ptr, cc_data(cache_sizes[0].cs_cachep),
sizeof(struct cpucache_int));
cc_data(cache_sizes[0].cs_cachep) = ptr;
local_irq_enable();
}
} }
int __init cpucache_init(void) int __init cpucache_init(void)
{ {
g_cpucache_up = 1; struct list_head* p;
enable_all_cpucaches();
down(&cache_chain_sem);
g_cpucache_up = FULL;
p = &cache_cache.next;
do {
kmem_cache_t* cachep = list_entry(p, kmem_cache_t, next);
enable_cpucache(cachep);
p = cachep->next.next;
} while (p != &cache_cache.next);
up(&cache_chain_sem);
return 0; return 0;
} }
...@@ -574,36 +720,33 @@ static inline int check_poison_obj (kmem_cache_t *cachep, void *addr) ...@@ -574,36 +720,33 @@ static inline int check_poison_obj (kmem_cache_t *cachep, void *addr)
*/ */
static void slab_destroy (kmem_cache_t *cachep, slab_t *slabp) static void slab_destroy (kmem_cache_t *cachep, slab_t *slabp)
{ {
if (cachep->dtor
#if DEBUG #if DEBUG
|| cachep->flags & (SLAB_POISON | SLAB_RED_ZONE)
#endif
) {
int i; int i;
for (i = 0; i < cachep->num; i++) { for (i = 0; i < cachep->num; i++) {
void* objp = slabp->s_mem+cachep->objsize*i; void* objp = slabp->s_mem+cachep->objsize*i;
#if DEBUG if (cachep->flags & SLAB_POISON)
check_poison_obj(cachep, objp);
if (cachep->flags & SLAB_RED_ZONE) { if (cachep->flags & SLAB_RED_ZONE) {
if (*((unsigned long*)(objp)) != RED_MAGIC1) if (*((unsigned long*)(objp)) != RED_MAGIC1)
BUG(); BUG();
if (*((unsigned long*)(objp + cachep->objsize if (*((unsigned long*)(objp + cachep->objsize -
-BYTES_PER_WORD)) != RED_MAGIC1) BYTES_PER_WORD)) != RED_MAGIC1)
BUG(); BUG();
objp += BYTES_PER_WORD; objp += BYTES_PER_WORD;
} }
#endif if (cachep->dtor && !(cachep->flags & SLAB_POISON))
if (cachep->dtor)
(cachep->dtor)(objp, cachep, 0); (cachep->dtor)(objp, cachep, 0);
#if DEBUG
if (cachep->flags & SLAB_RED_ZONE) {
objp -= BYTES_PER_WORD;
} }
if ((cachep->flags & SLAB_POISON) && #else
check_poison_obj(cachep, objp)) if (cachep->dtor) {
BUG(); int i;
#endif for (i = 0; i < cachep->num; i++) {
void* objp = slabp->s_mem+cachep->objsize*i;
(cachep->dtor)(objp, cachep, 0);
} }
} }
#endif
kmem_freepages(cachep, slabp->s_mem-slabp->colouroff); kmem_freepages(cachep, slabp->s_mem-slabp->colouroff);
if (OFF_SLAB(cachep)) if (OFF_SLAB(cachep))
...@@ -670,11 +813,6 @@ kmem_cache_create (const char *name, size_t size, size_t offset, ...@@ -670,11 +813,6 @@ kmem_cache_create (const char *name, size_t size, size_t offset,
flags &= ~SLAB_DEBUG_INITIAL; flags &= ~SLAB_DEBUG_INITIAL;
} }
if ((flags & SLAB_POISON) && ctor) {
/* request for poisoning, but we can't do that with a constructor */
printk("%sPoisoning requested, but con given - %s\n", func_nm, name);
flags &= ~SLAB_POISON;
}
#if FORCED_DEBUG #if FORCED_DEBUG
if ((size < (PAGE_SIZE>>3)) && !(flags & SLAB_MUST_HWCACHE_ALIGN)) if ((size < (PAGE_SIZE>>3)) && !(flags & SLAB_MUST_HWCACHE_ALIGN))
/* /*
...@@ -682,7 +820,6 @@ kmem_cache_create (const char *name, size_t size, size_t offset, ...@@ -682,7 +820,6 @@ kmem_cache_create (const char *name, size_t size, size_t offset,
* fragmentation. * fragmentation.
*/ */
flags |= SLAB_RED_ZONE; flags |= SLAB_RED_ZONE;
if (!ctor)
flags |= SLAB_POISON; flags |= SLAB_POISON;
#endif #endif
#endif #endif
...@@ -735,7 +872,6 @@ kmem_cache_create (const char *name, size_t size, size_t offset, ...@@ -735,7 +872,6 @@ kmem_cache_create (const char *name, size_t size, size_t offset,
if (flags & SLAB_HWCACHE_ALIGN) { if (flags & SLAB_HWCACHE_ALIGN) {
/* Need to adjust size so that objs are cache aligned. */ /* Need to adjust size so that objs are cache aligned. */
/* Small obj size, can get at least two per cache line. */ /* Small obj size, can get at least two per cache line. */
/* FIXME: only power of 2 supported, was better */
while (size < align/2) while (size < align/2)
align /= 2; align /= 2;
size = (size+align-1)&(~(align-1)); size = (size+align-1)&(~(align-1));
...@@ -802,19 +938,16 @@ kmem_cache_create (const char *name, size_t size, size_t offset, ...@@ -802,19 +938,16 @@ kmem_cache_create (const char *name, size_t size, size_t offset,
cachep->colour_off = offset; cachep->colour_off = offset;
cachep->colour = left_over/offset; cachep->colour = left_over/offset;
/* init remaining fields */
if (!cachep->gfporder && !(flags & CFLGS_OFF_SLAB))
flags |= CFLGS_OPTIMIZE;
cachep->flags = flags; cachep->flags = flags;
cachep->gfpflags = 0; cachep->gfpflags = 0;
if (flags & SLAB_CACHE_DMA) if (flags & SLAB_CACHE_DMA)
cachep->gfpflags |= GFP_DMA; cachep->gfpflags |= GFP_DMA;
spin_lock_init(&cachep->spinlock); spin_lock_init(&cachep->spinlock);
cachep->objsize = size; cachep->objsize = size;
INIT_LIST_HEAD(&cachep->slabs_full); /* NUMA */
INIT_LIST_HEAD(&cachep->slabs_partial); INIT_LIST_HEAD(&cachep->lists.slabs_full);
INIT_LIST_HEAD(&cachep->slabs_free); INIT_LIST_HEAD(&cachep->lists.slabs_partial);
INIT_LIST_HEAD(&cachep->lists.slabs_free);
if (flags & CFLGS_OFF_SLAB) if (flags & CFLGS_OFF_SLAB)
cachep->slabp_cache = kmem_find_general_cachep(slab_size,0); cachep->slabp_cache = kmem_find_general_cachep(slab_size,0);
...@@ -822,8 +955,27 @@ kmem_cache_create (const char *name, size_t size, size_t offset, ...@@ -822,8 +955,27 @@ kmem_cache_create (const char *name, size_t size, size_t offset,
cachep->dtor = dtor; cachep->dtor = dtor;
cachep->name = name; cachep->name = name;
if (g_cpucache_up) if (g_cpucache_up == FULL) {
enable_cpucache(cachep); enable_cpucache(cachep);
} else {
if (g_cpucache_up == NONE) {
/* Note: the first kmem_cache_create must create
* the cache that's used by kmalloc(24), otherwise
* the creation of further caches will BUG().
*/
cc_data(cachep) = &cpuarray_generic.cache;
g_cpucache_up = PARTIAL;
} else {
cc_data(cachep) = kmalloc(sizeof(struct cpucache_int),GFP_KERNEL);
}
BUG_ON(!cc_data(cachep));
cc_data(cachep)->avail = 0;
cc_data(cachep)->limit = BOOT_CPUCACHE_ENTRIES;
cc_data(cachep)->batchcount = 1;
cachep->batchcount = 1;
cachep->limit = BOOT_CPUCACHE_ENTRIES;
}
/* Need the semaphore to access the chain. */ /* Need the semaphore to access the chain. */
down(&cache_chain_sem); down(&cache_chain_sem);
{ {
...@@ -861,32 +1013,27 @@ kmem_cache_create (const char *name, size_t size, size_t offset, ...@@ -861,32 +1013,27 @@ kmem_cache_create (const char *name, size_t size, size_t offset,
return cachep; return cachep;
} }
static inline void check_irq_off(void)
#if DEBUG
/*
* This check if the kmem_cache_t pointer is chained in the cache_cache
* list. -arca
*/
static int is_chained_cache(kmem_cache_t * cachep)
{ {
struct list_head *p; #if DEBUG
int ret = 0; BUG_ON(!irqs_disabled());
#endif
/* Find the cache in the chain of caches. */ }
down(&cache_chain_sem);
list_for_each(p, &cache_chain) {
if (p == &cachep->next) {
ret = 1;
break;
}
}
up(&cache_chain_sem);
return ret; static inline void check_irq_on(void)
{
#if DEBUG
BUG_ON(irqs_disabled());
#endif
} }
#else
#define is_chained_cache(x) 1 static inline void check_spinlock_acquired(kmem_cache_t *cachep)
{
#ifdef CONFIG_SMP
check_irq_off();
BUG_ON(spin_trylock(&cachep->spinlock));
#endif #endif
}
/* /*
* Waits for all CPUs to execute func(). * Waits for all CPUs to execute func().
...@@ -900,20 +1047,6 @@ static void smp_call_function_all_cpus(void (*func) (void *arg), void *arg) ...@@ -900,20 +1047,6 @@ static void smp_call_function_all_cpus(void (*func) (void *arg), void *arg)
if (smp_call_function(func, arg, 1, 1)) if (smp_call_function(func, arg, 1, 1))
BUG(); BUG();
} }
typedef struct ccupdate_struct_s
{
kmem_cache_t *cachep;
cpucache_t *new[NR_CPUS];
} ccupdate_struct_t;
static void do_ccupdate_local(void *info)
{
ccupdate_struct_t *new = (ccupdate_struct_t *)info;
cpucache_t *old = cc_data(new->cachep);
cc_data(new->cachep) = new->new[smp_processor_id()];
new->new[smp_processor_id()] = old;
}
static void free_block (kmem_cache_t* cachep, void** objpp, int len); static void free_block (kmem_cache_t* cachep, void** objpp, int len);
...@@ -922,6 +1055,7 @@ static void do_drain(void *arg) ...@@ -922,6 +1055,7 @@ static void do_drain(void *arg)
kmem_cache_t *cachep = (kmem_cache_t*)arg; kmem_cache_t *cachep = (kmem_cache_t*)arg;
cpucache_t *cc; cpucache_t *cc;
check_irq_off();
cc = cc_data(cachep); cc = cc_data(cachep);
free_block(cachep, &cc_entry(cc)[0], cc->avail); free_block(cachep, &cc_entry(cc)[0], cc->avail);
cc->avail = 0; cc->avail = 0;
...@@ -932,6 +1066,8 @@ static void drain_cpu_caches(kmem_cache_t *cachep) ...@@ -932,6 +1066,8 @@ static void drain_cpu_caches(kmem_cache_t *cachep)
smp_call_function_all_cpus(do_drain, cachep); smp_call_function_all_cpus(do_drain, cachep);
} }
/* NUMA shrink all list3s */
static int __cache_shrink(kmem_cache_t *cachep) static int __cache_shrink(kmem_cache_t *cachep)
{ {
slab_t *slabp; slab_t *slabp;
...@@ -939,17 +1075,17 @@ static int __cache_shrink(kmem_cache_t *cachep) ...@@ -939,17 +1075,17 @@ static int __cache_shrink(kmem_cache_t *cachep)
drain_cpu_caches(cachep); drain_cpu_caches(cachep);
check_irq_on();
spin_lock_irq(&cachep->spinlock); spin_lock_irq(&cachep->spinlock);
/* If the cache is growing, stop shrinking. */ for(;;) {
while (!cachep->growing) {
struct list_head *p; struct list_head *p;
p = cachep->slabs_free.prev; p = cachep->lists.slabs_free.prev;
if (p == &cachep->slabs_free) if (p == &cachep->lists.slabs_free)
break; break;
slabp = list_entry(cachep->slabs_free.prev, slab_t, list); slabp = list_entry(cachep->lists.slabs_free.prev, slab_t, list);
#if DEBUG #if DEBUG
if (slabp->inuse) if (slabp->inuse)
BUG(); BUG();
...@@ -960,7 +1096,8 @@ static int __cache_shrink(kmem_cache_t *cachep) ...@@ -960,7 +1096,8 @@ static int __cache_shrink(kmem_cache_t *cachep)
slab_destroy(cachep, slabp); slab_destroy(cachep, slabp);
spin_lock_irq(&cachep->spinlock); spin_lock_irq(&cachep->spinlock);
} }
ret = !list_empty(&cachep->slabs_full) || !list_empty(&cachep->slabs_partial); ret = !list_empty(&cachep->lists.slabs_full) ||
!list_empty(&cachep->lists.slabs_partial);
spin_unlock_irq(&cachep->spinlock); spin_unlock_irq(&cachep->spinlock);
return ret; return ret;
} }
...@@ -974,7 +1111,7 @@ static int __cache_shrink(kmem_cache_t *cachep) ...@@ -974,7 +1111,7 @@ static int __cache_shrink(kmem_cache_t *cachep)
*/ */
int kmem_cache_shrink(kmem_cache_t *cachep) int kmem_cache_shrink(kmem_cache_t *cachep)
{ {
if (!cachep || in_interrupt() || !is_chained_cache(cachep)) if (!cachep || in_interrupt())
BUG(); BUG();
return __cache_shrink(cachep); return __cache_shrink(cachep);
...@@ -997,7 +1134,7 @@ int kmem_cache_shrink(kmem_cache_t *cachep) ...@@ -997,7 +1134,7 @@ int kmem_cache_shrink(kmem_cache_t *cachep)
*/ */
int kmem_cache_destroy (kmem_cache_t * cachep) int kmem_cache_destroy (kmem_cache_t * cachep)
{ {
if (!cachep || in_interrupt() || cachep->growing) if (!cachep || in_interrupt())
BUG(); BUG();
/* Find the cache in the chain of caches. */ /* Find the cache in the chain of caches. */
...@@ -1021,6 +1158,7 @@ int kmem_cache_destroy (kmem_cache_t * cachep) ...@@ -1021,6 +1158,7 @@ int kmem_cache_destroy (kmem_cache_t * cachep)
int i; int i;
for (i = 0; i < NR_CPUS; i++) for (i = 0; i < NR_CPUS; i++)
kfree(cachep->cpudata[i]); kfree(cachep->cpudata[i]);
/* NUMA: free the list3 structures */
} }
kmem_cache_free(&cache_cache, cachep); kmem_cache_free(&cache_cache, cachep);
...@@ -1039,10 +1177,6 @@ static inline slab_t * alloc_slabmgmt (kmem_cache_t *cachep, ...@@ -1039,10 +1177,6 @@ static inline slab_t * alloc_slabmgmt (kmem_cache_t *cachep,
if (!slabp) if (!slabp)
return NULL; return NULL;
} else { } else {
/* FIXME: change to
slabp = objp
* if you enable OPTIMIZE
*/
slabp = objp+colour_off; slabp = objp+colour_off;
colour_off += L1_CACHE_ALIGN(cachep->num * colour_off += L1_CACHE_ALIGN(cachep->num *
sizeof(kmem_bufctl_t) + sizeof(slab_t)); sizeof(kmem_bufctl_t) + sizeof(slab_t));
...@@ -1062,34 +1196,35 @@ static inline void cache_init_objs (kmem_cache_t * cachep, ...@@ -1062,34 +1196,35 @@ static inline void cache_init_objs (kmem_cache_t * cachep,
for (i = 0; i < cachep->num; i++) { for (i = 0; i < cachep->num; i++) {
void* objp = slabp->s_mem+cachep->objsize*i; void* objp = slabp->s_mem+cachep->objsize*i;
#if DEBUG #if DEBUG
/* need to poison the objs? */
if (cachep->flags & SLAB_POISON)
poison_obj(cachep, objp);
if (cachep->flags & SLAB_RED_ZONE) { if (cachep->flags & SLAB_RED_ZONE) {
*((unsigned long*)(objp)) = RED_MAGIC1; *((unsigned long*)(objp)) = RED_MAGIC1;
*((unsigned long*)(objp + cachep->objsize - *((unsigned long*)(objp + cachep->objsize -
BYTES_PER_WORD)) = RED_MAGIC1; BYTES_PER_WORD)) = RED_MAGIC1;
objp += BYTES_PER_WORD; objp += BYTES_PER_WORD;
} }
#endif
/* /*
* Constructors are not allowed to allocate memory from * Constructors are not allowed to allocate memory from
* the same cache which they are a constructor for. * the same cache which they are a constructor for.
* Otherwise, deadlock. They must also be threaded. * Otherwise, deadlock. They must also be threaded.
*/ */
if (cachep->ctor) if (cachep->ctor && !(cachep->flags & SLAB_POISON))
cachep->ctor(objp, cachep, ctor_flags); cachep->ctor(objp, cachep, ctor_flags);
#if DEBUG
if (cachep->flags & SLAB_RED_ZONE)
objp -= BYTES_PER_WORD;
if (cachep->flags & SLAB_POISON)
/* need to poison the objs */
poison_obj(cachep, objp);
if (cachep->flags & SLAB_RED_ZONE) { if (cachep->flags & SLAB_RED_ZONE) {
objp -= BYTES_PER_WORD;
if (*((unsigned long*)(objp)) != RED_MAGIC1) if (*((unsigned long*)(objp)) != RED_MAGIC1)
BUG(); BUG();
if (*((unsigned long*)(objp + cachep->objsize - if (*((unsigned long*)(objp + cachep->objsize -
BYTES_PER_WORD)) != RED_MAGIC1) BYTES_PER_WORD)) != RED_MAGIC1)
BUG(); BUG();
} }
#else
if (cachep->ctor)
cachep->ctor(objp, cachep, ctor_flags);
#endif #endif
slab_bufctl(slabp)[i] = i+1; slab_bufctl(slabp)[i] = i+1;
} }
...@@ -1097,6 +1232,20 @@ static inline void cache_init_objs (kmem_cache_t * cachep, ...@@ -1097,6 +1232,20 @@ static inline void cache_init_objs (kmem_cache_t * cachep,
slabp->free = 0; slabp->free = 0;
} }
static void kmem_flagcheck(kmem_cache_t *cachep, int flags)
{
if (flags & __GFP_WAIT)
might_sleep();
if (flags & SLAB_DMA) {
if (!(cachep->gfpflags & GFP_DMA))
BUG();
} else {
if (cachep->gfpflags & GFP_DMA)
BUG();
}
}
/* /*
* Grow (by 1) the number of slabs within a cache. This is called by * Grow (by 1) the number of slabs within a cache. This is called by
* kmem_cache_alloc() when there are no active objs left in a cache. * kmem_cache_alloc() when there are no active objs left in a cache.
...@@ -1109,7 +1258,6 @@ static int cache_grow (kmem_cache_t * cachep, int flags) ...@@ -1109,7 +1258,6 @@ static int cache_grow (kmem_cache_t * cachep, int flags)
size_t offset; size_t offset;
unsigned int i, local_flags; unsigned int i, local_flags;
unsigned long ctor_flags; unsigned long ctor_flags;
unsigned long save_flags;
/* Be lazy and only check for valid flags here, /* Be lazy and only check for valid flags here,
* keeping it out of the critical path in kmem_cache_alloc(). * keeping it out of the critical path in kmem_cache_alloc().
...@@ -1119,15 +1267,6 @@ static int cache_grow (kmem_cache_t * cachep, int flags) ...@@ -1119,15 +1267,6 @@ static int cache_grow (kmem_cache_t * cachep, int flags)
if (flags & SLAB_NO_GROW) if (flags & SLAB_NO_GROW)
return 0; return 0;
/*
* The test for missing atomic flag is performed here, rather than
* the more obvious place, simply to reduce the critical path length
* in kmem_cache_alloc(). If a caller is seriously mis-behaving they
* will eventually be caught here (where it matters).
*/
if (in_interrupt() && (flags & __GFP_WAIT))
BUG();
ctor_flags = SLAB_CTOR_CONSTRUCTOR; ctor_flags = SLAB_CTOR_CONSTRUCTOR;
local_flags = (flags & SLAB_LEVEL_MASK); local_flags = (flags & SLAB_LEVEL_MASK);
if (!(local_flags & __GFP_WAIT)) if (!(local_flags & __GFP_WAIT))
...@@ -1138,7 +1277,8 @@ static int cache_grow (kmem_cache_t * cachep, int flags) ...@@ -1138,7 +1277,8 @@ static int cache_grow (kmem_cache_t * cachep, int flags)
ctor_flags |= SLAB_CTOR_ATOMIC; ctor_flags |= SLAB_CTOR_ATOMIC;
/* About to mess with non-constant members - lock. */ /* About to mess with non-constant members - lock. */
spin_lock_irqsave(&cachep->spinlock, save_flags); check_irq_off();
spin_lock(&cachep->spinlock);
/* Get colour for the slab, and cal the next value. */ /* Get colour for the slab, and cal the next value. */
offset = cachep->colour_next; offset = cachep->colour_next;
...@@ -1146,19 +1286,20 @@ static int cache_grow (kmem_cache_t * cachep, int flags) ...@@ -1146,19 +1286,20 @@ static int cache_grow (kmem_cache_t * cachep, int flags)
if (cachep->colour_next >= cachep->colour) if (cachep->colour_next >= cachep->colour)
cachep->colour_next = 0; cachep->colour_next = 0;
offset *= cachep->colour_off; offset *= cachep->colour_off;
cachep->dflags |= DFLGS_GROWN;
cachep->growing++; spin_unlock(&cachep->spinlock);
spin_unlock_irqrestore(&cachep->spinlock, save_flags);
if (local_flags & __GFP_WAIT)
local_irq_enable();
/* A series of memory allocations for a new slab. /*
* Neither the cache-chain semaphore, or cache-lock, are * The test for missing atomic flag is performed here, rather than
* held, but the incrementing c_growing prevents this * the more obvious place, simply to reduce the critical path length
* cache from being reaped or shrunk. * in kmem_cache_alloc(). If a caller is seriously mis-behaving they
* Note: The cache could be selected in for reaping in * will eventually be caught here (where it matters).
* cache_reap(), but when the final test is made the
* growing value will be seen.
*/ */
kmem_flagcheck(cachep, flags);
/* Get mem for the objs. */ /* Get mem for the objs. */
if (!(objp = kmem_getpages(cachep, flags))) if (!(objp = kmem_getpages(cachep, flags)))
...@@ -1181,62 +1322,117 @@ static int cache_grow (kmem_cache_t * cachep, int flags) ...@@ -1181,62 +1322,117 @@ static int cache_grow (kmem_cache_t * cachep, int flags)
cache_init_objs(cachep, slabp, ctor_flags); cache_init_objs(cachep, slabp, ctor_flags);
spin_lock_irqsave(&cachep->spinlock, save_flags); if (local_flags & __GFP_WAIT)
cachep->growing--; local_irq_disable();
check_irq_off();
spin_lock(&cachep->spinlock);
/* Make slab active. */ /* Make slab active. */
list_add_tail(&slabp->list, &cachep->slabs_free); list_add_tail(&slabp->list, &(list3_data(cachep)->slabs_free));
STATS_INC_GROWN(cachep); STATS_INC_GROWN(cachep);
cachep->failures = 0; spin_unlock(&cachep->spinlock);
spin_unlock_irqrestore(&cachep->spinlock, save_flags);
return 1; return 1;
opps1: opps1:
kmem_freepages(cachep, objp); kmem_freepages(cachep, objp);
failed: failed:
spin_lock_irqsave(&cachep->spinlock, save_flags);
cachep->growing--;
spin_unlock_irqrestore(&cachep->spinlock, save_flags);
return 0; return 0;
} }
/* /*
* Perform extra freeing checks: * Perform extra freeing checks:
* - detect double free
* - detect bad pointers. * - detect bad pointers.
* Called with the cache-lock held. * - POISON/RED_ZONE checking
* - destructor calls, for caches with POISON+dtor
*/ */
static inline void kfree_debugcheck(const void *objp)
#if DEBUG
static int extra_free_checks (kmem_cache_t * cachep,
slab_t *slabp, void * objp)
{ {
int i; #if DEBUG
unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize; struct page *page;
if (objnr >= cachep->num) if (!virt_addr_valid(objp)) {
BUG(); printk(KERN_ERR "kfree_debugcheck: out of range ptr %lxh.\n",
if (objp != slabp->s_mem + objnr*cachep->objsize) (unsigned long)objp);
BUG(); BUG();
}
/* Check slab's freelist to see if this obj is there. */ page = virt_to_page(objp);
for (i = slabp->free; i != BUFCTL_END; i = slab_bufctl(slabp)[i]) { if (!PageSlab(page)) {
if (i == objnr) printk(KERN_ERR "kfree_debugcheck: bad ptr %lxh.\n", (unsigned long)objp);
BUG(); BUG();
} }
return 0;
}
#endif #endif
}
static inline void cache_alloc_head(kmem_cache_t *cachep, int flags) static inline void *cache_free_debugcheck (kmem_cache_t * cachep, void * objp)
{ {
if (flags & SLAB_DMA) { #if DEBUG
if (!(cachep->gfpflags & GFP_DMA)) struct page *page;
unsigned int objnr;
slab_t *slabp;
kfree_debugcheck(objp);
page = virt_to_page(objp);
BUG_ON(GET_PAGE_CACHE(page) != cachep);
slabp = GET_PAGE_SLAB(page);
if (cachep->flags & SLAB_RED_ZONE) {
objp -= BYTES_PER_WORD;
if (xchg((unsigned long *)objp, RED_MAGIC1) != RED_MAGIC2)
/* Either write before start, or a double free. */
BUG(); BUG();
} else { if (xchg((unsigned long *)(objp+cachep->objsize -
if (cachep->gfpflags & GFP_DMA) BYTES_PER_WORD), RED_MAGIC1) != RED_MAGIC2)
/* Either write past end, or a double free. */
BUG(); BUG();
} }
objnr = (objp-slabp->s_mem)/cachep->objsize;
BUG_ON(objnr >= cachep->num);
BUG_ON(objp != slabp->s_mem + objnr*cachep->objsize);
if (cachep->flags & SLAB_DEBUG_INITIAL) {
/* Need to call the slab's constructor so the
* caller can perform a verify of its state (debugging).
* Called without the cache-lock held.
*/
if (cachep->flags & SLAB_RED_ZONE) {
cachep->ctor(objp+BYTES_PER_WORD,
cachep, SLAB_CTOR_CONSTRUCTOR|SLAB_CTOR_VERIFY);
} else {
cachep->ctor(objp, cachep, SLAB_CTOR_CONSTRUCTOR|SLAB_CTOR_VERIFY);
}
}
if (cachep->flags & SLAB_POISON && cachep->dtor) {
/* we want to cache poison the object,
* call the destruction callback
*/
if (cachep->flags & SLAB_RED_ZONE)
cachep->dtor(objp+BYTES_PER_WORD, cachep, 0);
else
cachep->dtor(objp, cachep, 0);
}
if (cachep->flags & SLAB_POISON) {
poison_obj(cachep, objp);
}
#endif
return objp;
}
static inline void check_slabp(kmem_cache_t *cachep, slab_t *slabp)
{
#if DEBUG
int i;
int entries = 0;
check_spinlock_acquired(cachep);
/* Check slab's freelist to see if this obj is there. */
for (i = slabp->free; i != BUFCTL_END; i = slab_bufctl(slabp)[i]) {
entries++;
BUG_ON(entries > cachep->num);
}
BUG_ON(entries != cachep->num - slabp->inuse);
#endif
} }
static inline void * cache_alloc_one_tail (kmem_cache_t *cachep, static inline void * cache_alloc_one_tail (kmem_cache_t *cachep,
...@@ -1244,6 +1440,8 @@ static inline void * cache_alloc_one_tail (kmem_cache_t *cachep, ...@@ -1244,6 +1440,8 @@ static inline void * cache_alloc_one_tail (kmem_cache_t *cachep,
{ {
void *objp; void *objp;
check_spinlock_acquired(cachep);
STATS_INC_ALLOCED(cachep); STATS_INC_ALLOCED(cachep);
STATS_INC_ACTIVE(cachep); STATS_INC_ACTIVE(cachep);
STATS_SET_HIGH(cachep); STATS_SET_HIGH(cachep);
...@@ -1253,11 +1451,83 @@ static inline void * cache_alloc_one_tail (kmem_cache_t *cachep, ...@@ -1253,11 +1451,83 @@ static inline void * cache_alloc_one_tail (kmem_cache_t *cachep,
objp = slabp->s_mem + slabp->free*cachep->objsize; objp = slabp->s_mem + slabp->free*cachep->objsize;
slabp->free=slab_bufctl(slabp)[slabp->free]; slabp->free=slab_bufctl(slabp)[slabp->free];
if (unlikely(slabp->free == BUFCTL_END)) { return objp;
}
static inline void cache_alloc_listfixup(struct kmem_list3 *l3, slab_t *slabp)
{
list_del(&slabp->list); list_del(&slabp->list);
list_add(&slabp->list, &cachep->slabs_full); if (slabp->free == BUFCTL_END) {
list_add(&slabp->list, &l3->slabs_full);
} else {
list_add(&slabp->list, &l3->slabs_partial);
}
}
static void* cache_alloc_refill(kmem_cache_t* cachep, int flags)
{
int batchcount;
struct kmem_list3 *l3;
cpucache_t *cc;
check_irq_off();
cc = cc_data(cachep);
retry:
batchcount = cc->batchcount;
l3 = list3_data(cachep);
BUG_ON(cc->avail > 0);
spin_lock(&cachep->spinlock);
while (batchcount > 0) {
struct list_head *entry;
slab_t *slabp;
/* Get slab alloc is to come from. */
entry = l3->slabs_partial.next;
if (entry == &l3->slabs_partial) {
entry = l3->slabs_free.next;
if (entry == &l3->slabs_free)
goto must_grow;
} }
slabp = list_entry(entry, slab_t, list);
check_slabp(cachep, slabp);
while (slabp->inuse < cachep->num && batchcount--)
cc_entry(cc)[cc->avail++] =
cache_alloc_one_tail(cachep, slabp);
check_slabp(cachep, slabp);
cache_alloc_listfixup(l3, slabp);
}
must_grow:
spin_unlock(&cachep->spinlock);
if (unlikely(!cc->avail)) {
int x;
x = cache_grow(cachep, flags);
// cache_grow can reenable interrupts, then cc could change.
cc = cc_data(cachep);
if (!x && cc->avail == 0) // no objects in sight? abort
return NULL;
if (!cc->avail) // objects refilled by interrupt?
goto retry;
}
return cc_entry(cc)[--cc->avail];
}
static inline void cache_alloc_debugcheck_before(kmem_cache_t *cachep, int flags)
{
#if DEBUG
kmem_flagcheck(cachep, flags);
#endif
}
static inline void *cache_alloc_debugcheck_after (kmem_cache_t *cachep, unsigned long flags, void *objp)
{
#if DEBUG #if DEBUG
if (!objp)
return objp;
if (cachep->flags & SLAB_POISON) if (cachep->flags & SLAB_POISON)
if (check_poison_obj(cachep, objp)) if (check_poison_obj(cachep, objp))
BUG(); BUG();
...@@ -1271,181 +1541,58 @@ static inline void * cache_alloc_one_tail (kmem_cache_t *cachep, ...@@ -1271,181 +1541,58 @@ static inline void * cache_alloc_one_tail (kmem_cache_t *cachep,
BUG(); BUG();
objp += BYTES_PER_WORD; objp += BYTES_PER_WORD;
} }
#endif if (cachep->ctor && cachep->flags & SLAB_POISON) {
return objp; unsigned long ctor_flags = SLAB_CTOR_CONSTRUCTOR;
}
/* if (!flags & __GFP_WAIT)
* Returns a ptr to an obj in the given cache. ctor_flags |= SLAB_CTOR_ATOMIC;
* caller must guarantee synchronization
* #define for the goto optimization 8-)
*/
#define cache_alloc_one(cachep) \
({ \
struct list_head * slabs_partial, * entry; \
slab_t *slabp; \
\
slabs_partial = &(cachep)->slabs_partial; \
entry = slabs_partial->next; \
if (unlikely(entry == slabs_partial)) { \
struct list_head * slabs_free; \
slabs_free = &(cachep)->slabs_free; \
entry = slabs_free->next; \
if (unlikely(entry == slabs_free)) \
goto alloc_new_slab; \
list_del(entry); \
list_add(entry, slabs_partial); \
} \
\
slabp = list_entry(entry, slab_t, list); \
cache_alloc_one_tail(cachep, slabp); \
})
void* cache_alloc_batch(kmem_cache_t* cachep, int flags)
{
int batchcount = cachep->batchcount;
cpucache_t* cc = cc_data(cachep);
spin_lock(&cachep->spinlock);
while (batchcount--) {
struct list_head * slabs_partial, * entry;
slab_t *slabp;
/* Get slab alloc is to come from. */
slabs_partial = &(cachep)->slabs_partial;
entry = slabs_partial->next;
if (unlikely(entry == slabs_partial)) {
struct list_head * slabs_free;
slabs_free = &(cachep)->slabs_free;
entry = slabs_free->next;
if (unlikely(entry == slabs_free))
break;
list_del(entry);
list_add(entry, slabs_partial);
}
slabp = list_entry(entry, slab_t, list); cachep->ctor(objp, cachep, ctor_flags);
cc_entry(cc)[cc->avail++] =
cache_alloc_one_tail(cachep, slabp);
} }
spin_unlock(&cachep->spinlock); #endif
return objp;
if (cc->avail)
return cc_entry(cc)[--cc->avail];
return NULL;
} }
static inline void * __cache_alloc (kmem_cache_t *cachep, int flags) static inline void * __cache_alloc (kmem_cache_t *cachep, int flags)
{ {
unsigned long save_flags; unsigned long save_flags;
void* objp; void* objp;
cpucache_t *cc;
if (flags & __GFP_WAIT) cache_alloc_debugcheck_before(cachep, flags);
might_sleep();
cache_alloc_head(cachep, flags);
try_again:
local_irq_save(save_flags); local_irq_save(save_flags);
{ cc = cc_data(cachep);
cpucache_t *cc = cc_data(cachep); if (likely(cc->avail)) {
if (cc) {
if (cc->avail) {
STATS_INC_ALLOCHIT(cachep); STATS_INC_ALLOCHIT(cachep);
objp = cc_entry(cc)[--cc->avail]; objp = cc_entry(cc)[--cc->avail];
} else { } else {
STATS_INC_ALLOCMISS(cachep); STATS_INC_ALLOCMISS(cachep);
objp = cache_alloc_batch(cachep,flags); objp = cache_alloc_refill(cachep, flags);
local_irq_restore(save_flags);
if (!objp)
goto alloc_new_slab_nolock;
return objp;
}
} else {
spin_lock(&cachep->spinlock);
objp = cache_alloc_one(cachep);
spin_unlock(&cachep->spinlock);
}
} }
local_irq_restore(save_flags); local_irq_restore(save_flags);
objp = cache_alloc_debugcheck_after(cachep, flags, objp);
return objp; return objp;
alloc_new_slab:
spin_unlock(&cachep->spinlock);
local_irq_restore(save_flags);
alloc_new_slab_nolock:
if (cache_grow(cachep, flags))
/* Someone may have stolen our objs. Doesn't matter, we'll
* just come back here again.
*/
goto try_again;
return NULL;
} }
/* /*
* Release an obj back to its cache. If the obj has a constructed * NUMA: different approach needed if the spinlock is moved into
* state, it should be in this state _before_ it is released. * the l3 structure
* - caller is responsible for the synchronization
*/ */
#if DEBUG static inline void __free_block (kmem_cache_t* cachep, void** objpp, int len)
# define CHECK_NR(pg) \
do { \
if (!virt_addr_valid(pg)) { \
printk(KERN_ERR "kfree: out of range ptr %lxh.\n", \
(unsigned long)objp); \
BUG(); \
} \
} while (0)
# define CHECK_PAGE(addr) \
do { \
struct page *page = virt_to_page(addr); \
CHECK_NR(addr); \
if (!PageSlab(page)) { \
printk(KERN_ERR "kfree: bad ptr %lxh.\n", \
(unsigned long)objp); \
BUG(); \
} \
} while (0)
#else
# define CHECK_PAGE(pg) do { } while (0)
#endif
static inline void cache_free_one(kmem_cache_t *cachep, void *objp)
{ {
check_irq_off();
spin_lock(&cachep->spinlock);
/* NUMA: move add into loop */
for ( ; len > 0; len--, objpp++) {
slab_t* slabp; slab_t* slabp;
void *objp = *objpp;
CHECK_PAGE(objp);
/* reduces memory footprint
*
if (OPTIMIZE(cachep))
slabp = (void*)((unsigned long)objp&(~(PAGE_SIZE-1)));
else
*/
slabp = GET_PAGE_SLAB(virt_to_page(objp)); slabp = GET_PAGE_SLAB(virt_to_page(objp));
list_del(&slabp->list);
#if DEBUG
if (cachep->flags & SLAB_DEBUG_INITIAL)
/* Need to call the slab's constructor so the
* caller can perform a verify of its state (debugging).
* Called without the cache-lock held.
*/
cachep->ctor(objp, cachep, SLAB_CTOR_CONSTRUCTOR|SLAB_CTOR_VERIFY);
if (cachep->flags & SLAB_RED_ZONE) {
objp -= BYTES_PER_WORD;
if (xchg((unsigned long *)objp, RED_MAGIC1) != RED_MAGIC2)
/* Either write before start, or a double free. */
BUG();
if (xchg((unsigned long *)(objp+cachep->objsize -
BYTES_PER_WORD), RED_MAGIC1) != RED_MAGIC2)
/* Either write past end, or a double free. */
BUG();
}
if (cachep->flags & SLAB_POISON)
poison_obj(cachep, objp);
if (extra_free_checks(cachep, slabp, objp))
return;
#endif
{ {
unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize; unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize;
...@@ -1455,62 +1602,67 @@ static inline void cache_free_one(kmem_cache_t *cachep, void *objp) ...@@ -1455,62 +1602,67 @@ static inline void cache_free_one(kmem_cache_t *cachep, void *objp)
STATS_DEC_ACTIVE(cachep); STATS_DEC_ACTIVE(cachep);
/* fixup slab chains */ /* fixup slab chains */
{
int inuse = slabp->inuse;
if (unlikely(!--slabp->inuse)) { if (unlikely(!--slabp->inuse)) {
/* Was partial or full, now empty. */ if (list_empty(&list3_data_ptr(cachep, objp)->slabs_free)) {
list_del(&slabp->list);
/* We only buffer a single page */
if (list_empty(&cachep->slabs_free))
list_add(&slabp->list, &cachep->slabs_free);
else
slab_destroy(cachep, slabp); slab_destroy(cachep, slabp);
} else if (unlikely(inuse == cachep->num)) { } else {
/* Was full. */ list_add(&slabp->list,
list_del(&slabp->list); &list3_data_ptr(cachep, objp)->slabs_free);
list_add_tail(&slabp->list, &cachep->slabs_partial);
} }
} else {
/* Unconditionally move a slab to the end of the
* partial list on free - maximum time for the
* other objects to be freed, too.
*/
list_add_tail(&slabp->list, &list3_data_ptr(cachep, objp)->slabs_partial);
} }
}
spin_unlock(&cachep->spinlock);
} }
static inline void __free_block (kmem_cache_t* cachep, static void free_block(kmem_cache_t* cachep, void** objpp, int len)
void** objpp, int len)
{ {
for ( ; len > 0; len--, objpp++) __free_block(cachep, objpp, len);
cache_free_one(cachep, *objpp);
} }
static void free_block (kmem_cache_t* cachep, void** objpp, int len) static void cache_flusharray (kmem_cache_t* cachep, cpucache_t *cc)
{ {
spin_lock(&cachep->spinlock); int batchcount;
__free_block(cachep, objpp, len);
spin_unlock(&cachep->spinlock); batchcount = cc->batchcount;
#if DEBUG
BUG_ON(!batchcount || batchcount > cc->avail);
#endif
check_irq_off();
__free_block(cachep, &cc_entry(cc)[0], batchcount);
cc->avail -= batchcount;
memmove(&cc_entry(cc)[0], &cc_entry(cc)[batchcount],
sizeof(void*)*cc->avail);
} }
/* /*
* __cache_free * __cache_free
* called with disabled ints * Release an obj back to its cache. If the obj has a constructed
* state, it must be in this state _before_ it is released.
*
* Called with disabled ints.
*/ */
static inline void __cache_free (kmem_cache_t *cachep, void* objp) static inline void __cache_free (kmem_cache_t *cachep, void* objp)
{ {
cpucache_t *cc = cc_data(cachep); cpucache_t *cc = cc_data_ptr(cachep, objp);
CHECK_PAGE(objp); check_irq_off();
if (cc) { objp = cache_free_debugcheck(cachep, objp);
int batchcount;
if (cc->avail < cc->limit) { if (likely(cc->avail < cc->limit)) {
STATS_INC_FREEHIT(cachep); STATS_INC_FREEHIT(cachep);
cc_entry(cc)[cc->avail++] = objp; cc_entry(cc)[cc->avail++] = objp;
return; return;
} } else {
STATS_INC_FREEMISS(cachep); STATS_INC_FREEMISS(cachep);
batchcount = cachep->batchcount; cache_flusharray(cachep, cc);
cc->avail -= batchcount;
free_block(cachep, &cc_entry(cc)[cc->avail], batchcount);
cc_entry(cc)[cc->avail++] = objp; cc_entry(cc)[cc->avail++] = objp;
return;
} else {
free_block(cachep, &objp, 1);
} }
} }
...@@ -1555,6 +1707,13 @@ void * kmalloc (size_t size, int flags) ...@@ -1555,6 +1707,13 @@ void * kmalloc (size_t size, int flags)
for (; csizep->cs_size; csizep++) { for (; csizep->cs_size; csizep++) {
if (size > csizep->cs_size) if (size > csizep->cs_size)
continue; continue;
#if DEBUG
/* This happens if someone tries to call
* kmem_cache_create(), or kmalloc(), before
* the generic caches are initialized.
*/
BUG_ON(csizep->cs_cachep == NULL);
#endif
return __cache_alloc(flags & GFP_DMA ? return __cache_alloc(flags & GFP_DMA ?
csizep->cs_dmacachep : csizep->cs_cachep, flags); csizep->cs_dmacachep : csizep->cs_cachep, flags);
} }
...@@ -1572,11 +1731,6 @@ void * kmalloc (size_t size, int flags) ...@@ -1572,11 +1731,6 @@ void * kmalloc (size_t size, int flags)
void kmem_cache_free (kmem_cache_t *cachep, void *objp) void kmem_cache_free (kmem_cache_t *cachep, void *objp)
{ {
unsigned long flags; unsigned long flags;
#if DEBUG
CHECK_PAGE(objp);
if (cachep != GET_PAGE_CACHE(virt_to_page(objp)))
BUG();
#endif
local_irq_save(flags); local_irq_save(flags);
__cache_free(cachep, objp); __cache_free(cachep, objp);
...@@ -1598,7 +1752,7 @@ void kfree (const void *objp) ...@@ -1598,7 +1752,7 @@ void kfree (const void *objp)
if (!objp) if (!objp)
return; return;
local_irq_save(flags); local_irq_save(flags);
CHECK_PAGE(objp); kfree_debugcheck(objp);
c = GET_PAGE_CACHE(virt_to_page(objp)); c = GET_PAGE_CACHE(virt_to_page(objp));
__cache_free(c, (void*)objp); __cache_free(c, (void*)objp);
local_irq_restore(flags); local_irq_restore(flags);
...@@ -1629,26 +1783,30 @@ kmem_cache_t * kmem_find_general_cachep (size_t size, int gfpflags) ...@@ -1629,26 +1783,30 @@ kmem_cache_t * kmem_find_general_cachep (size_t size, int gfpflags)
return (gfpflags & GFP_DMA) ? csizep->cs_dmacachep : csizep->cs_cachep; return (gfpflags & GFP_DMA) ? csizep->cs_dmacachep : csizep->cs_cachep;
} }
/* called with cache_chain_sem acquired. */ struct ccupdate_struct {
static int tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount) kmem_cache_t *cachep;
cpucache_t *new[NR_CPUS];
};
static void do_ccupdate_local(void *info)
{ {
ccupdate_struct_t new; struct ccupdate_struct *new = (struct ccupdate_struct *)info;
int i; cpucache_t *old;
/* check_irq_off();
* These are admin-provided, so we are more graceful. old = cc_data(new->cachep);
*/
if (limit < 0) cc_data(new->cachep) = new->new[smp_processor_id()];
return -EINVAL; new->new[smp_processor_id()] = old;
if (batchcount < 0) }
return -EINVAL;
if (batchcount > limit)
return -EINVAL; static int do_tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount)
if (limit != 0 && !batchcount) {
return -EINVAL; struct ccupdate_struct new;
int i;
memset(&new.new,0,sizeof(new.new)); memset(&new.new,0,sizeof(new.new));
if (limit) {
for (i = 0; i < NR_CPUS; i++) { for (i = 0; i < NR_CPUS; i++) {
cpucache_t* ccnew; cpucache_t* ccnew;
...@@ -1658,18 +1816,21 @@ static int tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount) ...@@ -1658,18 +1816,21 @@ static int tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount)
for (i--; i >= 0; i--) kfree(new.new[i]); for (i--; i >= 0; i--) kfree(new.new[i]);
return -ENOMEM; return -ENOMEM;
} }
ccnew->limit = limit;
ccnew->avail = 0; ccnew->avail = 0;
ccnew->limit = limit;
ccnew->batchcount = batchcount;
new.new[i] = ccnew; new.new[i] = ccnew;
} }
}
new.cachep = cachep; new.cachep = cachep;
smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);
check_irq_on();
spin_lock_irq(&cachep->spinlock); spin_lock_irq(&cachep->spinlock);
cachep->batchcount = batchcount; cachep->batchcount = batchcount;
cachep->limit = limit;
spin_unlock_irq(&cachep->spinlock); spin_unlock_irq(&cachep->spinlock);
smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);
for (i = 0; i < NR_CPUS; i++) { for (i = 0; i < NR_CPUS; i++) {
cpucache_t* ccold = new.new[i]; cpucache_t* ccold = new.new[i];
if (!ccold) if (!ccold)
...@@ -1682,48 +1843,25 @@ static int tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount) ...@@ -1682,48 +1843,25 @@ static int tune_cpucache (kmem_cache_t* cachep, int limit, int batchcount)
return 0; return 0;
} }
/*
* If slab debugging is enabled, don't batch slabs
* on the per-cpu lists by defaults.
*/
static void enable_cpucache (kmem_cache_t *cachep) static void enable_cpucache (kmem_cache_t *cachep)
{ {
#ifndef CONFIG_DEBUG_SLAB
int err; int err;
int limit; int limit;
/* FIXME: optimize */
if (cachep->objsize > PAGE_SIZE) if (cachep->objsize > PAGE_SIZE)
return; limit = 8;
if (cachep->objsize > 1024) else if (cachep->objsize > 1024)
limit = 60; limit = 54;
else if (cachep->objsize > 256) else if (cachep->objsize > 256)
limit = 124; limit = 120;
else else
limit = 252; limit = 248;
err = tune_cpucache(cachep, limit, limit/2); err = do_tune_cpucache(cachep, limit, limit/2);
if (err) if (err)
printk(KERN_ERR "enable_cpucache failed for %s, error %d.\n", printk(KERN_ERR "enable_cpucache failed for %s, error %d.\n",
cachep->name, -err); cachep->name, -err);
#endif
}
static void enable_all_cpucaches (void)
{
struct list_head* p;
down(&cache_chain_sem);
p = &cache_cache.next;
do {
kmem_cache_t* cachep = list_entry(p, kmem_cache_t, next);
enable_cpucache(cachep);
p = cachep->next.next;
} while (p != &cache_cache.next);
up(&cache_chain_sem);
} }
/** /**
...@@ -1762,12 +1900,6 @@ int cache_reap (int gfp_mask) ...@@ -1762,12 +1900,6 @@ int cache_reap (int gfp_mask)
if (searchp->flags & SLAB_NO_REAP) if (searchp->flags & SLAB_NO_REAP)
goto next; goto next;
spin_lock_irq(&searchp->spinlock); spin_lock_irq(&searchp->spinlock);
if (searchp->growing)
goto next_unlock;
if (searchp->dflags & DFLGS_GROWN) {
searchp->dflags &= ~DFLGS_GROWN;
goto next_unlock;
}
{ {
cpucache_t *cc = cc_data(searchp); cpucache_t *cc = cc_data(searchp);
if (cc && cc->avail) { if (cc && cc->avail) {
...@@ -1777,8 +1909,8 @@ int cache_reap (int gfp_mask) ...@@ -1777,8 +1909,8 @@ int cache_reap (int gfp_mask)
} }
full_free = 0; full_free = 0;
p = searchp->slabs_free.next; p = searchp->lists.slabs_free.next;
while (p != &searchp->slabs_free) { while (p != &searchp->lists.slabs_free) {
slabp = list_entry(p, slab_t, list); slabp = list_entry(p, slab_t, list);
#if DEBUG #if DEBUG
if (slabp->inuse) if (slabp->inuse)
...@@ -1808,7 +1940,6 @@ int cache_reap (int gfp_mask) ...@@ -1808,7 +1940,6 @@ int cache_reap (int gfp_mask)
goto perfect; goto perfect;
} }
} }
next_unlock:
spin_unlock_irq(&searchp->spinlock); spin_unlock_irq(&searchp->spinlock);
next: next:
searchp = list_entry(searchp->next.next,kmem_cache_t,next); searchp = list_entry(searchp->next.next,kmem_cache_t,next);
...@@ -1827,10 +1958,8 @@ int cache_reap (int gfp_mask) ...@@ -1827,10 +1958,8 @@ int cache_reap (int gfp_mask)
for (scan = 0; scan < best_len; scan++) { for (scan = 0; scan < best_len; scan++) {
struct list_head *p; struct list_head *p;
if (best_cachep->growing) p = best_cachep->lists.slabs_free.prev;
break; if (p == &best_cachep->lists.slabs_free)
p = best_cachep->slabs_free.prev;
if (p == &best_cachep->slabs_free)
break; break;
slabp = list_entry(p,slab_t,list); slabp = list_entry(p,slab_t,list);
#if DEBUG #if DEBUG
...@@ -1913,23 +2042,24 @@ static int s_show(struct seq_file *m, void *p) ...@@ -1913,23 +2042,24 @@ static int s_show(struct seq_file *m, void *p)
return 0; return 0;
} }
check_irq_on();
spin_lock_irq(&cachep->spinlock); spin_lock_irq(&cachep->spinlock);
active_objs = 0; active_objs = 0;
num_slabs = 0; num_slabs = 0;
list_for_each(q,&cachep->slabs_full) { list_for_each(q,&cachep->lists.slabs_full) {
slabp = list_entry(q, slab_t, list); slabp = list_entry(q, slab_t, list);
if (slabp->inuse != cachep->num) if (slabp->inuse != cachep->num)
BUG(); BUG();
active_objs += cachep->num; active_objs += cachep->num;
active_slabs++; active_slabs++;
} }
list_for_each(q,&cachep->slabs_partial) { list_for_each(q,&cachep->lists.slabs_partial) {
slabp = list_entry(q, slab_t, list); slabp = list_entry(q, slab_t, list);
BUG_ON(slabp->inuse == cachep->num || !slabp->inuse); BUG_ON(slabp->inuse == cachep->num || !slabp->inuse);
active_objs += slabp->inuse; active_objs += slabp->inuse;
active_slabs++; active_slabs++;
} }
list_for_each(q,&cachep->slabs_free) { list_for_each(q,&cachep->lists.slabs_free) {
slabp = list_entry(q, slab_t, list); slabp = list_entry(q, slab_t, list);
if (slabp->inuse) if (slabp->inuse)
BUG(); BUG();
...@@ -2050,7 +2180,13 @@ ssize_t slabinfo_write(struct file *file, const char *buffer, ...@@ -2050,7 +2180,13 @@ ssize_t slabinfo_write(struct file *file, const char *buffer,
kmem_cache_t *cachep = list_entry(p, kmem_cache_t, next); kmem_cache_t *cachep = list_entry(p, kmem_cache_t, next);
if (!strcmp(cachep->name, kbuf)) { if (!strcmp(cachep->name, kbuf)) {
res = tune_cpucache(cachep, limit, batchcount); if (limit < 1 ||
batchcount < 1 ||
batchcount > limit) {
res = -EINVAL;
} else {
res = do_tune_cpucache(cachep, limit, batchcount);
}
break; break;
} }
} }