Commit aedb0eb1 authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] Slab: Do not fallback to nodes that have not been bootstrapped yet

The zonelist may contain zones of nodes that have not been bootstrapped, and we will oops if we try to allocate from those zones.  So check whether the node information for the slab and the node has been set up before attempting an allocation.  If it has not been set up, skip that zone.

Usually we will not encounter this situation, since the slab bootstrap code avoids falling back before the respective nodes have been set up, but we seem to have a special need for this on powerpc.
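
For context, here is a small standalone C sketch of the guard this patch adds.  It is not kernel code: toy_cache, toy_node_state and fallback_alloc_node are made-up stand-ins, and only the NULL check on the per-node pointer mirrors the real change.  A cache's per-node entry stays NULL until that node has been bootstrapped, so the fallback loop must skip such nodes instead of dereferencing the missing state.

    #include <stdio.h>

    #define MAX_NODES 4

    struct toy_node_state { int free_objects; };

    struct toy_cache {
        /* NULL until the node has been bootstrapped (stand-in for per-node slab state) */
        struct toy_node_state *nodelists[MAX_NODES];
    };

    /* Try each candidate node in order; skip nodes whose state is not set up yet. */
    static int fallback_alloc_node(const struct toy_cache *cache,
                                   const int *candidate_nids, int ncandidates)
    {
        for (int i = 0; i < ncandidates; i++) {
            int nid = candidate_nids[i];

            if (!cache->nodelists[nid])     /* the check this patch adds */
                continue;
            if (cache->nodelists[nid]->free_objects > 0)
                return nid;                 /* would allocate from this node */
        }
        return -1;                          /* no usable node found */
    }

    int main(void)
    {
        struct toy_node_state node1 = { .free_objects = 8 };
        struct toy_cache cache = { .nodelists = { NULL, &node1, NULL, NULL } };
        int fallback_nids[] = { 0, 2, 1 };  /* nodes 0 and 2 not bootstrapped yet */

        printf("allocating from node %d\n",
               fallback_alloc_node(&cache, fallback_nids, 3));
        return 0;
    }

Without the NULL check, walking node 0 in this example would dereference a NULL per-node pointer, which is the oops described above.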
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Acked-by: Andy Whitcroft <apw@shadowen.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Mike Kravetz <kravetz@us.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 75167957
@@ -3152,12 +3152,15 @@ void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
     struct zone **z;
     void *obj = NULL;
 
-    for (z = zonelist->zones; *z && !obj; z++)
+    for (z = zonelist->zones; *z && !obj; z++) {
+        int nid = zone_to_nid(*z);
+
         if (zone_idx(*z) <= ZONE_NORMAL &&
-                cpuset_zone_allowed(*z, flags))
+                cpuset_zone_allowed(*z, flags) &&
+                cache->nodelists[nid])
             obj = __cache_alloc_node(cache,
-                    flags | __GFP_THISNODE,
-                    zone_to_nid(*z));
+                    flags | __GFP_THISNODE, nid);
+    }
 
     return obj;
 }