Commit 0bc35a97 authored by Vlastimil Babka, committed by Linus Torvalds

mm: unify checks in alloc_pages_node() and __alloc_pages_node()

Perform the same debug checks in alloc_pages_node() as are done in
__alloc_pages_node(), by making the former function a wrapper of the
latter one.

In addition to better diagnostics in DEBUG_VM builds for situations that
were already fatal (e.g. an out-of-bounds node id), there are two visible
changes for potentially buggy existing callers of alloc_pages_node():

- Calling alloc_pages_node() with any negative nid (e.g. due to arithmetic
  overflow) was treated as passing NUMA_NO_NODE, and fallback to the local
  node was applied. This is now fatal.
- Calling alloc_pages_node() with an offline node is now checked in
  DEBUG_VM builds. Since it is not fatal if the node was previously online,
  and this patch may expose some existing buggy callers, change the
  VM_BUG_ON in __alloc_pages_node() to a VM_WARN_ON.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 96db800f
@@ -310,23 +310,23 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order,
 static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
-	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid));
+	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
+	VM_WARN_ON(!node_online(nid));
 
 	return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
 }
 
 /*
  * Allocate pages, preferring the node given as nid. When nid == NUMA_NO_NODE,
- * prefer the current CPU's node.
+ * prefer the current CPU's node. Otherwise node must be valid and online.
  */
 static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
 						unsigned int order)
 {
-	/* Unknown node is current node */
-	if (nid < 0)
+	if (nid == NUMA_NO_NODE)
 		nid = numa_node_id();
 
-	return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
+	return __alloc_pages_node(nid, gfp_mask, order);
 }
 
 #ifdef CONFIG_NUMA