Commit 464c7ffb authored by Aneesh Kumar K.V, committed by Linus Torvalds

mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.

When scanning for movable pages, filter out hugetlb pages if hugepage
migration is not supported.  Without this we hit an infinite loop in
__offline_pages() where we do

	pfn = scan_movable_pages(start_pfn, end_pfn);
	if (pfn) { /* We have movable pages */
		ret = do_migrate_range(pfn, end_pfn);
		goto repeat;
	}
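
To see why this never terminates, consider a hugetlb page whose size
the architecture cannot migrate: do_migrate_range() cannot move it,
the page stays where it is, and the next scan finds it again.  The
following is a minimal userspace model of that retry loop (not kernel
code); every name prefixed with model_ is a hypothetical stand-in for
illustration only.

	/* Minimal userspace model of the retry loop above: the scan
	 * keeps reporting a hugetlb page that migration can never
	 * move, so without filtering the loop spins forever. */
	#include <stdbool.h>
	#include <stdio.h>

	static bool page_still_present = true;

	static unsigned long model_scan_movable_pages(void)
	{
		/* Unfixed behaviour: the non-migratable huge page is
		 * reported as movable on every pass. */
		return page_still_present ? 0x1000 : 0;
	}

	static void model_do_migrate_range(unsigned long pfn)
	{
		/* Migration is unsupported for this page size, so the
		 * page is never actually moved. */
	}

	int main(void)
	{
		unsigned long pfn;
		int passes = 0;

		while ((pfn = model_scan_movable_pages()) != 0) {
			model_do_migrate_range(pfn);
			if (++passes > 5) {	/* cut the demo short */
				printf("stuck on pfn %#lx\n", pfn);
				break;
			}
		}
		return 0;
	}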

Fix this by checking hugepage_migration_supported() both in
has_unmovable_pages(), which is the primary backoff mechanism for page
offlining, and, for consistency, also in scan_movable_pages(), because
it doesn't make any sense to return the pfn of a non-migratable huge
page.
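
As a rough sketch, the condition scan_movable_pages() ends up testing
amounts to the hypothetical helper below (the patch open-codes the
check at both call sites rather than adding a helper, and
has_unmovable_pages() only needs the first half);
hugepage_migration_supported(), page_hstate() and page_huge_active()
are the real kernel interfaces used in the diff:

	/* Hypothetical helper, for illustration only. */
	static bool movable_huge_page(struct page *page)
	{
		/* The architecture must be able to migrate pages of
		 * this hstate's size at all... */
		if (!hugepage_migration_supported(page_hstate(page)))
			return false;
		/* ...and the page must be an in-use (active) huge
		 * page; scan_movable_pages() skips it otherwise. */
		return page_huge_active(page);
	}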

This issue was revealed by, but not caused by, commit 72b39cfc ("mm,
memory_hotplug: do not fail offlining too early").

Link: http://lkml.kernel.org/r/20180824063314.21981-1-aneesh.kumar@linux.ibm.com
Fixes: 72b39cfc ("mm, memory_hotplug: do not fail offlining too early")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reported-by: Haren Myneni <haren@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 04b8e946
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 		if (__PageMovable(page))
 			return pfn;
 		if (PageHuge(page)) {
-			if (page_huge_active(page))
+			if (hugepage_migration_supported(page_hstate(page)) &&
+			    page_huge_active(page))
 				return pfn;
 			else
 				pfn = round_up(pfn + 1,
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7708,6 +7708,10 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		 * handle each tail page individually in migration.
 		 */
 		if (PageHuge(page)) {
+
+			if (!hugepage_migration_supported(page_hstate(page)))
+				goto unmovable;
+
 			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
 			continue;
 		}