Commit 2e83ee1d authored by Peter Xu, committed by Linus Torvalds

mm: thp: fix flags for pmd migration when split

When splitting a huge migrating PMD, we'll transfer all the existing PMD
bits and apply them again onto the small PTEs.  However, we are fetching
the bits unconditionally via pmd_soft_dirty(), pmd_write() or
pmd_young(), while they actually make no sense at all when the PMD is a
migration entry.  Fix them up.  While at it, drop the ifdef as it is no
longer needed.

Note that, if my understanding of the problem is correct, then without
this patch there is a chance of losing some of the dirty bits in
migrating pmd pages (on x86_64 we're fetching bit 11, which is part of
the swap offset, instead of bit 2), which could potentially corrupt the
memory of a userspace program that depends on the dirty bit.

Link: http://lkml.kernel.org/r/20181213051510.20306-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: <stable@vger.kernel.org>	[4.14+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 2830bf6f
@@ -2144,23 +2144,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	old_pmd = pmdp_invalidate(vma, haddr, pmd);
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	pmd_migration = is_pmd_migration_entry(old_pmd);
-	if (pmd_migration) {
+	if (unlikely(pmd_migration)) {
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_to_page(swp_offset(entry));
-	} else
-#endif
+		write = is_write_migration_entry(entry);
+		young = false;
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+	} else {
 		page = pmd_page(old_pmd);
-	VM_BUG_ON_PAGE(!page_count(page), page);
-	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
-		SetPageDirty(page);
-	write = pmd_write(old_pmd);
-	young = pmd_young(old_pmd);
-	soft_dirty = pmd_soft_dirty(old_pmd);
+		if (pmd_dirty(old_pmd))
+			SetPageDirty(page);
+		write = pmd_write(old_pmd);
+		young = pmd_young(old_pmd);
+		soft_dirty = pmd_soft_dirty(old_pmd);
+	}
+	VM_BUG_ON_PAGE(!page_count(page), page);
+	page_ref_add(page, HPAGE_PMD_NR - 1);
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.