Commit 19942822 authored by Johannes Weiner, committed by Linus Torvalds

memcg: prevent endless loop when charging huge pages to near-limit group

If reclaim after a failed charging was unsuccessful, the limits are
checked again, just in case they settled by means of other tasks.

This is all fine as long as every charge is of size PAGE_SIZE, because in
that case, being below the limit means having at least PAGE_SIZE bytes
available.

But with transparent huge pages, we may end up in an endless loop where
charging and reclaim fail, but we keep going because the limits are not
yet exceeded, although not allowing for a huge page.

Fix this up by explicitly checking for enough room, not just whether we
are within limits.
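
For illustration, here is a minimal userspace sketch of the two checks
(made-up numbers; check_under_limit() and check_margin() are simplified
stand-ins for the kernel helpers, not the real res_counter code):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	(4UL << 10)	/* 4KB regular page */
#define HPAGE_SIZE	(2UL << 20)	/* 2MB transparent huge page */

struct counter {
	unsigned long usage;
	unsigned long limit;
};

/* old check: are we below the limit at all? */
static bool check_under_limit(struct counter *cnt)
{
	return cnt->usage < cnt->limit;
}

/* new check: is there enough room for this particular charge? */
static bool check_margin(struct counter *cnt, unsigned long bytes)
{
	return cnt->limit - cnt->usage >= bytes;
}

int main(void)
{
	/* 1MB of room: fine for a regular page, not for a huge page */
	struct counter cnt = { .usage = 99UL << 20, .limit = 100UL << 20 };

	printf("under limit:        %d\n", check_under_limit(&cnt));        /* 1 */
	printf("margin for 4K page: %d\n", check_margin(&cnt, PAGE_SIZE));  /* 1 */
	printf("margin for 2M page: %d\n", check_margin(&cnt, HPAGE_SIZE)); /* 0 */
	return 0;
}

A retry loop keyed on check_under_limit() alone would keep attempting
the 2MB charge forever, which is exactly the endless loop described
above.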
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9221edb7
include/linux/res_counter.h
@@ -182,6 +182,26 @@ static inline bool res_counter_check_under_limit(struct res_counter *cnt)
 	return ret;
 }
 
+/**
+ * res_counter_check_margin - check if the counter allows charging
+ * @cnt: the resource counter to check
+ * @bytes: the number of bytes to check the remaining space against
+ *
+ * Returns a boolean value on whether the counter can be charged
+ * @bytes or whether this would exceed the limit.
+ */
+static inline bool res_counter_check_margin(struct res_counter *cnt,
+					    unsigned long bytes)
+{
+	bool ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&cnt->lock, flags);
+	ret = cnt->limit - cnt->usage >= bytes;
+	spin_unlock_irqrestore(&cnt->lock, flags);
+	return ret;
+}
+
 static inline bool res_counter_check_under_soft_limit(struct res_counter *cnt)
 {
 	bool ret;
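A note on the helper above: reading usage and limit under cnt->lock
yields one consistent snapshot against concurrent charges, and because
res_counter rejects any charge that would push usage past limit, the
unsigned subtraction cannot wrap.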
mm/memcontrol.c
@@ -1111,6 +1111,23 @@ static bool mem_cgroup_check_under_limit(struct mem_cgroup *mem)
 	return false;
 }
 
+/**
+ * mem_cgroup_check_margin - check if the memory cgroup allows charging
+ * @mem: memory cgroup to check
+ * @bytes: the number of bytes the caller intends to charge
+ *
+ * Returns a boolean value on whether @mem can be charged @bytes or
+ * whether this would exceed the limit.
+ */
+static bool mem_cgroup_check_margin(struct mem_cgroup *mem, unsigned long bytes)
+{
+	if (!res_counter_check_margin(&mem->res, bytes))
+		return false;
+	if (do_swap_account && !res_counter_check_margin(&mem->memsw, bytes))
+		return false;
+	return true;
+}
+
 static unsigned int get_swappiness(struct mem_cgroup *memcg)
 {
 	struct cgroup *cgrp = memcg->css.cgroup;
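The cgroup-level wrapper simply requires margin in both counters: the
memory counter always, and the memory+swap counter whenever swap
accounting (do_swap_account) is enabled, since a charge has to fit
under both limits to succeed.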
@@ -1853,14 +1870,18 @@ static int __mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
 
 	ret = mem_cgroup_hierarchical_reclaim(mem_over_limit, NULL,
 					      gfp_mask, flags);
+	if (mem_cgroup_check_margin(mem_over_limit, csize))
+		return CHARGE_RETRY;
 	/*
-	 * try_to_free_mem_cgroup_pages() might not give us a full
-	 * picture of reclaim. Some pages are reclaimed and might be
-	 * moved to swap cache or just unmapped from the cgroup.
-	 * Check the limit again to see if the reclaim reduced the
-	 * current usage of the cgroup before giving up
+	 * Even though the limit is exceeded at this point, reclaim
+	 * may have been able to free some pages.  Retry the charge
+	 * before killing the task.
+	 *
+	 * Only for regular pages, though: huge pages are rather
+	 * unlikely to succeed so close to the limit, and we fall back
+	 * to regular pages anyway in case of failure.
 	 */
-	if (ret || mem_cgroup_check_under_limit(mem_over_limit))
+	if (csize == PAGE_SIZE && ret)
 		return CHARGE_RETRY;
 
 	/*
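To make the new control flow easier to follow, here is a standalone
sketch of the decision order this hunk establishes (retry_decision() is
a hypothetical helper; the real __mem_cgroup_do_charge() continues on
to OOM handling rather than returning a failure code at this point):

enum charge_result { CHARGE_RETRY, CHARGE_NOMEM };

static enum charge_result retry_decision(bool margin_ok, bool reclaimed,
					 unsigned long csize,
					 unsigned long page_size)
{
	/* reclaim opened up room for the whole charge: retry it */
	if (margin_ok)
		return CHARGE_RETRY;
	/*
	 * Regular page and reclaim made progress: retry before
	 * resorting to the OOM killer.
	 */
	if (csize == page_size && reclaimed)
		return CHARGE_RETRY;
	/*
	 * Huge page still over the margin: give up here; the caller
	 * falls back to charging regular pages instead.
	 */
	return CHARGE_NOMEM;
}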