Commit b26e517a authored by Feng Tang, committed by Linus Torvalds

mm/mempolicy: cleanup nodemask intersection check for oom

Patch series "mm/mempolicy: some fix and semantics cleanup", v4.

The current memory policy code has some confusing and ambiguous parts
around the MPOL_LOCAL policy, which is handled as a faked MPOL_PREFERRED
one, forcing many places to distinguish between the two.  The nodemask
intersection check also needs cleanup to be more explicit for OOM use
and to handle MPOL_INTERLEAVE correctly.  This patchset cleans these up
and unifies the parameter sanity checks for mbind() and set_mempolicy().

This patch (of 3):

mempolicy_nodemask_intersects() seems to be a general purpose mempolicy
function, but in fact it is partially tailored to the OOM path.  The
OOM proper is the only existing user, so rename the function to make
that purpose explicit.

While at it, drop the MPOL_INTERLEAVE case: those allocations never have
a nodemask defined (see alloc_page_interleave), so it is dead code, and
confusing code at that, because MPOL_INTERLEAVE is a hint rather than a
hard requirement and therefore shouldn't be considered during OOM.
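For reference, a simplified sketch of why that branch was unreachable;
this is a paraphrase of alloc_page_interleave() with the statistics
handling elided, not the verbatim kernel source:

    /* Interleave picks a single node per allocation and passes a NULL
     * nodemask to the page allocator, so an interleaved allocation
     * never carries a nodemask that could constrain it to a node set.
     */
    static struct page *alloc_page_interleave(gfp_t gfp, unsigned int order,
                                              unsigned int nid)
    {
            return __alloc_pages(gfp, order, nid, NULL); /* nodemask == NULL */
    }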

The final code can be reduced to a check for MPOL_BIND, which is the
only memory policy that is a hard requirement and thus relevant to the
constrained OOM logic.
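For context, a minimal userspace sketch of how a task acquires such a
hard binding via set_mempolicy(2); the node number and error handling
are illustrative only (link with -lnuma):

    #include <numaif.h>     /* set_mempolicy(), MPOL_BIND */
    #include <stdio.h>

    int main(void)
    {
            /* Bind all future allocations of this task to node 0.  Unlike
             * MPOL_PREFERRED or MPOL_INTERLEAVE, MPOL_BIND is a hard
             * requirement: the kernel will not fall back to other nodes,
             * so this task is a meaningful target for an OOM constrained
             * to node 0.
             */
            unsigned long nodemask = 1UL << 0;      /* bit 0 == node 0 */

            if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)))
                    perror("set_mempolicy");
            return 0;
    }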

[mhocko@suse.com: changelog edits]

Link: https://lkml.kernel.org/r/1622560492-1294-1-git-send-email-feng.tang@intel.com
Link: https://lkml.kernel.org/r/1622560492-1294-2-git-send-email-feng.tang@intel.com
Link: https://lkml.kernel.org/r/1622469956-82897-1-git-send-email-feng.tang@intel.com
Link: https://lkml.kernel.org/r/1622469956-82897-2-git-send-email-feng.tang@intel.com
Signed-off-by: Feng Tang <feng.tang@intel.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Ben Widawsky <ben.widawsky@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent b55ca526
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -150,7 +150,7 @@ extern int huge_node(struct vm_area_struct *vma,
 				unsigned long addr, gfp_t gfp_flags,
 				struct mempolicy **mpol, nodemask_t **nodemask);
 extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
-extern bool mempolicy_nodemask_intersects(struct task_struct *tsk,
+extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
 				const nodemask_t *mask);
 
 extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2094,16 +2094,16 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
 #endif
 
 /*
- * mempolicy_nodemask_intersects
+ * mempolicy_in_oom_domain
  *
- * If tsk's mempolicy is "default" [NULL], return 'true' to indicate default
- * policy.  Otherwise, check for intersection between mask and the policy
- * nodemask for 'bind' or 'interleave' policy.  For 'preferred' or 'local'
- * policy, always return true since it may allocate elsewhere on fallback.
+ * If tsk's mempolicy is "bind", check for intersection between mask and
+ * the policy nodemask. Otherwise, return true for all other policies
+ * including "interleave", as a tsk with "interleave" policy may have
+ * memory allocated from all nodes in system.
  *
  * Takes task_lock(tsk) to prevent freeing of its mempolicy.
  */
-bool mempolicy_nodemask_intersects(struct task_struct *tsk,
+bool mempolicy_in_oom_domain(struct task_struct *tsk,
 					const nodemask_t *mask)
 {
 	struct mempolicy *mempolicy;
@@ -2111,29 +2111,13 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 
 	if (!mask)
 		return ret;
+
 	task_lock(tsk);
 	mempolicy = tsk->mempolicy;
-	if (!mempolicy)
-		goto out;
-
-	switch (mempolicy->mode) {
-	case MPOL_PREFERRED:
-		/*
-		 * MPOL_PREFERRED and MPOL_F_LOCAL are only preferred nodes to
-		 * allocate from, they may fallback to other nodes when oom.
-		 * Thus, it's possible for tsk to have allocated memory from
-		 * nodes in mask.
-		 */
-		break;
-	case MPOL_BIND:
-	case MPOL_INTERLEAVE:
-		ret = nodes_intersects(mempolicy->v.nodes, *mask);
-		break;
-	default:
-		BUG();
-	}
-out:
+	if (mempolicy && mempolicy->mode == MPOL_BIND)
+		ret = nodes_intersects(mempolicy->v.nodes, *mask);
 	task_unlock(tsk);
+
 	return ret;
 }
 
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -104,7 +104,7 @@ static bool oom_cpuset_eligible(struct task_struct *start,
 			 * mempolicy intersects current, otherwise it may be
 			 * needlessly killed.
 			 */
-			ret = mempolicy_nodemask_intersects(tsk, mask);
+			ret = mempolicy_in_oom_domain(tsk, mask);
 		} else {
 			/*
 			 * This is not a mempolicy constrained oom, so only
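To make the new semantics concrete, here is a small userspace model of
the check (the names are local to this sketch, not kernel API): a task
is inside the OOM domain unless it has a hard MPOL_BIND policy whose
nodemask is disjoint from the constrained mask.

    #include <stdbool.h>
    #include <stdio.h>

    enum mode { MODE_DEFAULT, MODE_PREFERRED, MODE_INTERLEAVE, MODE_BIND };

    /* Mirrors mempolicy_in_oom_domain(): only MPOL_BIND constrains the
     * task hard enough that a disjoint nodemask rules it out as an OOM
     * victim; any other policy may have fallen back to any node.
     */
    static bool in_oom_domain(enum mode mode, unsigned long policy_nodes,
                              unsigned long oom_nodes)
    {
            if (mode == MODE_BIND)
                    return policy_nodes & oom_nodes;  /* nodes_intersects() */
            return true;
    }

    int main(void)
    {
            /* Task bound to node 1, OOM constrained to node 0: the task
             * cannot hold memory on node 0, so killing it would be useless.
             */
            printf("%d\n", in_oom_domain(MODE_BIND, 1UL << 1, 1UL << 0));       /* 0 */
            printf("%d\n", in_oom_domain(MODE_INTERLEAVE, 1UL << 1, 1UL << 0)); /* 1 */
            return 0;
    }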