Commit 8bcb74de authored by Naoya Horiguchi, committed by Linus Torvalds

mm: hwpoison: call shake_page() unconditionally

shake_page() is called before entering the core error handling code in
order to ensure that the error page is flushed from the lru_cache lists,
where pages stay while being transferred between LRU lists.

But currently it's not fully functional: when the page is linked to
lru_cache by calling activate_page(), its PageLRU flag is set and
shake_page() is skipped.  As a result, error handling fails with a
"still referenced by 1 users" message.

When the page is linked to lru_cache by isolate_lru_page(), its PageLRU
flag is clear, so that case is fine.

This patch calls shake_page() unconditionally to avoid the failure.

Fixes: 23a003bf ("mm/madvise: pass return code of memory_failure() to userspace")
Link: http://lkml.kernel.org/r/20170417055948.GM31394@yexl-desktop
Link: http://lkml.kernel.org/r/1493197841-23986-2-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Xiaolong Ye <xiaolong.ye@intel.com>
Cc: Chen Gong <gong.chen@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0ccfece6
@@ -34,8 +34,7 @@ static int hwpoison_inject(void *data, u64 val)
 	if (!hwpoison_filter_enable)
 		goto inject;
-	if (!PageLRU(hpage) && !PageHuge(p))
-		shake_page(hpage, 0);
+	shake_page(hpage, 0);
 	/*
 	 * This implies unable to support non-LRU pages.
 	 */
@@ -220,6 +220,9 @@ static int kill_proc(struct task_struct *t, unsigned long addr, int trapno,
  */
 void shake_page(struct page *p, int access)
 {
+	if (PageHuge(p))
+		return;
+
 	if (!PageSlab(p)) {
 		lru_add_drain_all();
 		if (PageLRU(p))
@@ -1137,22 +1140,14 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
 	 * The check (unnecessarily) ignores LRU pages being isolated and
 	 * walked by the page reclaim code, however that's not a big loss.
 	 */
-	if (!PageHuge(p)) {
-		if (!PageLRU(p))
-			shake_page(p, 0);
-		if (!PageLRU(p)) {
-			/*
-			 * shake_page could have turned it free.
-			 */
-			if (is_free_buddy_page(p)) {
-				if (flags & MF_COUNT_INCREASED)
-					action_result(pfn, MF_MSG_BUDDY, MF_DELAYED);
-				else
-					action_result(pfn, MF_MSG_BUDDY_2ND,
-						      MF_DELAYED);
-				return 0;
-			}
-		}
-	}
+	shake_page(p, 0);
+	/* shake_page could have turned it free. */
+	if (!PageLRU(p) && is_free_buddy_page(p)) {
+		if (flags & MF_COUNT_INCREASED)
+			action_result(pfn, MF_MSG_BUDDY, MF_DELAYED);
+		else
+			action_result(pfn, MF_MSG_BUDDY_2ND, MF_DELAYED);
+		return 0;
+	}
 
 	lock_page(hpage);