Commit 21d02f8f authored by Mel Gorman, committed by Linus Torvalds

mm/page_alloc: move free_the_page

Patch series "Allow high order pages to be stored on PCP", v2.

The per-cpu page allocator (PCP) only handles order-0 pages.  With the
series "Use local_lock for pcp protection and reduce stat overhead" and
"Calculate pcp->high based on zone sizes and active CPUs", it's now
feasible to store high-order pages on PCP lists.

This small series allows PCP to store "cheap" orders where cheap is
determined by PAGE_ALLOC_COSTLY_ORDER and THP-sized allocations.
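
As a rough sketch of the "cheap" criterion (the actual helper is introduced
in patch 2 of the series and may differ in detail), the check amounts to
something along the lines of a pcp_allowed_order() test:

    /* Sketch only: orders up to PAGE_ALLOC_COSTLY_ORDER are treated as
     * cheap, and a THP-sized (pageblock_order) allocation may also be
     * allowed on the PCP lists when THP is enabled. */
    static inline bool pcp_allowed_order(unsigned int order)
    {
            if (order <= PAGE_ALLOC_COSTLY_ORDER)
                    return true;
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
            if (order == pageblock_order)
                    return true;
    #endif
            return false;
    }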

This patch (of 2):

In the next patch, free_compound_page is going to use the common helper
free_the_page.  This patch moves the definition to ease review.  No
functional change.
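
For context, the follow-up change is expected to have roughly this shape
(illustrative sketch only, not the actual diff from patch 2):

    /* free_compound_page() would route through the shared helper instead
     * of calling __free_pages_ok() directly, so that compound pages of
     * eligible orders can take the PCP path later in the series. */
    void free_compound_page(struct page *page)
    {
            free_the_page(page, compound_order(page));
    }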

Link: https://lkml.kernel.org/r/20210603142220.10851-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20210603142220.10851-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f7ec1044
mm/page_alloc.c
@@ -687,6 +687,14 @@ static void bad_page(struct page *page, const char *reason)
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
 
+static inline void free_the_page(struct page *page, unsigned int order)
+{
+	if (order == 0)		/* Via pcp? */
+		free_unref_page(page);
+	else
+		__free_pages_ok(page, order, FPI_NONE);
+}
+
 /*
  * Higher-order pages are called "compound pages".  They are structured thusly:
  *
@@ -5349,14 +5357,6 @@ unsigned long get_zeroed_page(gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(get_zeroed_page);
 
-static inline void free_the_page(struct page *page, unsigned int order)
-{
-	if (order == 0)		/* Via pcp? */
-		free_unref_page(page);
-	else
-		__free_pages_ok(page, order, FPI_NONE);
-}
-
 /**
  * __free_pages - Free pages allocated with alloc_pages().
  * @page: The page pointer returned from alloc_pages().
...