Commit 93585352 authored by Oleg Nesterov, committed by Andrea Arcangeli

mm: fix the theoretical compound_lock() vs prep_new_page() race

The get/put_page(thp_tail) paths do get_page_unless_zero(page_head) +
compound_lock(). In theory this page_head can have already been freed
and reallocated via alloc_pages(__GFP_COMP, smaller_order). In that case
get_page_unless_zero() can succeed right after set_page_refcounted(),
and compound_lock() can then race with the non-atomic __SetPageHead().

Perhaps we should rework the thp locking (this is under discussion), but
until then this patch moves set_page_refcounted() down and adds smp_wmb()
to ensure that page->_count != 0 is the last change to become visible.

I am not sure about other callers of set_page_refcounted(), but at
first glance they look fine to me.
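
For reference, a minimal sketch of the reader side that the new barrier is
meant to order against (not code from this patch; it loosely follows the
tail-page get_page() path described above, and the function name
example_get_thp_tail() is made up for illustration):

        #include <linux/mm.h>

        /*
         * Illustrative only: a stripped-down tail-page reference path.
         * page_head may have been freed and reallocated via
         * alloc_pages(__GFP_COMP, smaller_order), so the speculative
         * get_page_unless_zero() must not succeed before the new head
         * page is fully initialized.
         */
        static bool example_get_thp_tail(struct page *page)
        {
                struct page *page_head = compound_head(page);
                unsigned long flags;
                bool got = false;

                if (likely(get_page_unless_zero(page_head))) {
                        /*
                         * Without the smp_wmb() in prep_new_page(), this
                         * compound_lock() could run while a reallocated
                         * page_head is still being set up by the
                         * non-atomic __SetPage*() stores.
                         */
                        flags = compound_lock_irqsave(page_head);
                        if (PageTail(page)) {
                                /* ... take the tail reference under the lock ... */
                                got = true;
                        }
                        compound_unlock_irqrestore(page_head, flags);
                        if (!got)
                                put_page(page_head);
                }
                return got;
        }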
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
parent fa175d10
mm/page_alloc.c
@@ -968,8 +968,6 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
         }
         set_page_private(page, 0);
-        set_page_refcounted(page);
         arch_alloc_page(page, order);
         kernel_map_pages(page, 1 << order, 1);
         kasan_alloc_pages(page, order);
@@ -980,6 +978,14 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
         if (order && (gfp_flags & __GFP_COMP))
                 prep_compound_page(page, order);
+        /*
+         * Make sure the caller of get_page_unless_zero() will see the
+         * fully initialized page. Say, to ensure that compound_lock()
+         * can't race with the non-atomic __SetPage*() above.
+         */
+        smp_wmb();
+        set_page_refcounted(page);
         set_page_owner(page, order, gfp_flags);
         /*
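
How the pairing works (a hedged reading, not stated in the patch itself):
get_page_unless_zero() is atomic_inc_not_zero() on page->_count, and a
successful value-returning atomic RMW is fully ordered, so a reader that
wins the speculative reference is guaranteed to observe the stores made
before the smp_wmb():

        CPU 0: prep_new_page()                  CPU 1: get/put_page(thp_tail)
        ----------------------                  -----------------------------
        prep_compound_page(page, order);
          /* non-atomic __SetPage*() */
        smp_wmb();
        set_page_refcounted(page);
                                                get_page_unless_zero(page_head);
                                                  /* succeeds only if it sees
                                                     _count != 0, i.e. after
                                                     all the stores above */
                                                compound_lock(page_head);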