Commit e6e9d6e2 authored by Michal Hocko, committed by Greg Kroah-Hartman

mm: handle lru_add_drain_all for UP properly

[ Upstream commit 6ea183d6 ]

Since for_each_cpu(cpu, mask), added by commit 2d3854a3
("cpumask: introduce new API, without changing anything"), does not
evaluate the mask argument when NR_CPUS == 1 due to CONFIG_SMP=n,
lru_add_drain_all() unconditionally calls flush_work() and hits the
WARN_ON() at __flush_work() added by commit 4d43d395 ("workqueue:
Try to catch flush_work() without INIT_WORK().") [1].

Work around this issue by using a CONFIG_SMP=n specific
lru_add_drain_all() implementation.  There is no real need to defer
the work to the workqueue, as the draining is going to happen on the
local cpu anyway.  So alias lru_add_drain_all to lru_add_drain, which
does all the necessary work.
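
As a rough sketch of the resulting shape (a simplified userspace
analogue of the two configurations, not the kernel source; CONFIG_SMP
is used here as an ordinary preprocessor symbol):

#include <stdio.h>

static void lru_add_drain(void)
{
	printf("drained the local cpu's pagevecs\n");
}

#ifdef CONFIG_SMP
/* SMP shape (stubbed here): queue lru_add_drain_per_cpu() on every
 * CPU with pending pagevecs, then flush_work() each queued item. */
void lru_add_drain_all(void)
{
	/* per-cpu queue-and-flush machinery elided */
}
#else
/* UP shape, matching the last hunk below: the local CPU is the only
 * CPU, so a direct drain does all the necessary work and no work
 * item is ever handed to flush_work(). */
void lru_add_drain_all(void)
{
	lru_add_drain();
}
#endif

int main(void)
{
	lru_add_drain_all();
	return 0;
}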

[akpm@linux-foundation.org: fix various build warnings]
[1] https://lkml.kernel.org/r/18a30387-6aa5-6123-e67c-57579ecc3f38@roeck-us.net
Link: http://lkml.kernel.org/r/20190213124334.GH4525@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Debugged-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent f3a9c9be
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -321,11 +321,6 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static bool need_activate_page_drain(int cpu)
-{
-	return false;
-}
-
 void activate_page(struct page *page)
 {
 	struct zone *zone = page_zone(page);
@@ -654,13 +649,15 @@ void lru_add_drain(void)
 	put_cpu();
 }
 
+#ifdef CONFIG_SMP
+
+static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
+
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
 }
 
-static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
-
 /*
  * Doesn't need any cpu hotplug locking because we do rely on per-cpu
  * kworkers being shut down before our page_alloc_cpu_dead callback is
@@ -703,6 +700,12 @@ void lru_add_drain_all(void)
 
 	mutex_unlock(&lock);
 }
+#else
+void lru_add_drain_all(void)
+{
+	lru_add_drain();
+}
+#endif
 
 /**
  * release_pages - batched put_page()