Commit a4523a8b authored by Roland Dreier, committed by Linus Torvalds

[PATCH] slab: Fix kmem_cache_destroy() on NUMA

With CONFIG_NUMA set, kmem_cache_destroy() may fail and say "Can't
free all objects."  The problem is caused by sequences such as the
following (suppose we are on a NUMA machine with two nodes, 0 and 1):

 * Allocate an object from cache on node 0.
 * Free the object on node 1.  The object is put into node 1's alien
   array_cache for node 0.
 * Call kmem_cache_destroy(), which ultimately ends up in __cache_shrink().
 * __cache_shrink() calls drain_cpu_caches(), which loops through all nodes.
   For each node it drains the shared array_cache and then handles the
   alien array_cache for the other node, as sketched below.
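
For reference, here is the pre-patch loop in simplified form
(reconstructed from the removed lines in the diff below); the comments
trace the two-node scenario:

	/*
	 * Pre-patch drain loop, simplified.  Trace:
	 *   node = 0: drain_array() empties node 0's shared array_cache;
	 *     node 0's alien caches hold nothing of interest here.
	 *   node = 1: drain_alien_cache() flushes node 1's alien[0]
	 *     entries back toward node 0, refilling node 0's shared
	 *     array_cache -- which this loop never visits again.
	 */
	for_each_online_node(node) {
		l3 = cachep->nodelists[node];
		if (l3) {
			drain_array(cachep, l3, l3->shared, 1, node);
			if (l3->alien)
				drain_alien_cache(cachep, l3->alien);
		}
	}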

However, this means that node 0's shared array_cache will be drained,
and then node 1 will move the contents of its alien[0] array_cache
into that same shared array_cache.  node 0's shared array_cache is
never looked at again, so the objects left there will appear to be in
use when __cache_shrink() calls __node_shrink() for node 0.  So
__node_shrink() will return 1 and kmem_cache_destroy() will fail.
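
The refill happens because the alien drain deliberately parks objects
in the remote node's shared array_cache instead of freeing them one at
a time.  A rough sketch of what __drain_alien_cache() did in 2.6-era
mm/slab.c (a simplified, approximate rendering; the exact helpers and
locking details may differ slightly):

	static void __drain_alien_cache(struct kmem_cache *cachep,
					struct array_cache *ac, int node)
	{
		struct kmem_list3 *rl3 = cachep->nodelists[node];

		if (ac->avail) {
			spin_lock(&rl3->list_lock);
			/* Park objects in the remote node's shared array... */
			if (rl3->shared)
				transfer_objects(rl3->shared, ac, ac->limit);
			/* ...and free only whatever did not fit. */
			free_block(cachep, ac->entry, ac->avail, node);
			ac->avail = 0;
			spin_unlock(&rl3->list_lock);
		}
	}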

This patch fixes this by having drain_cpu_caches() do
drain_alien_cache() on every node before it does drain_array() on the
nodes' shared array_caches.

The problem was originally reported by Or Gerlitz <ogerlitz@voltaire.com>.
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 40e59a61
@@ -2200,11 +2200,14 @@ static void drain_cpu_caches(struct kmem_cache *cachep)
 	check_irq_on();
 	for_each_online_node(node) {
 		l3 = cachep->nodelists[node];
-		if (l3) {
-			drain_array(cachep, l3, l3->shared, 1, node);
-			if (l3->alien)
-				drain_alien_cache(cachep, l3->alien);
-		}
+		if (l3 && l3->alien)
+			drain_alien_cache(cachep, l3->alien);
+	}
+
+	for_each_online_node(node) {
+		l3 = cachep->nodelists[node];
+		if (l3)
+			drain_array(cachep, l3, l3->shared, 1, node);
 	}
 }