Commit ad7df764 authored by Alistair Popple, committed by Linus Torvalds

mm/rmap: fixup copying of soft dirty and uffd ptes

During memory migration a pte is temporarily replaced with a migration
swap pte.  Some pte bits from the existing mapping such as the soft-dirty
and uffd write-protect bits are preserved by copying these to the
temporary migration swap pte.

However these bits are not stored at the same location for swap and
non-swap ptes.  Therefore testing these bits requires using the
appropriate helper function for the given pte type.

Unfortunately several code locations were found where the wrong helper
function is used to test the soft_dirty and uffd_wp bits, which leads to
them being incorrectly set or cleared during page migration.

Fix these by using the correct tests based on pte type.

Fixes: a5430dda ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
Fixes: 8c3328f1 ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Fixes: f45ec5ff ("userfaultfd: wp: support swap and page migration")
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Alistair Popple <alistair@popple.id.au>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200825064232.10023-2-alistair@popple.id.au
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ebdf8321
@@ -2427,10 +2427,17 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			entry = make_migration_entry(page, mpfn &
 						     MIGRATE_PFN_WRITE);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pte))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pte))
-				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			if (pte_present(pte)) {
+				if (pte_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_uffd_wp(pte))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			} else {
+				if (pte_swp_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_swp_uffd_wp(pte))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			}
 			set_pte_at(mm, addr, ptep, swp_pte);
 			/*
@@ -1511,9 +1511,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 */
 			entry = make_migration_entry(page, 0);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pteval))
+
+			/*
+			 * pteval maps a zone device page and is therefore
+			 * a swap pte.
+			 */
+			if (pte_swp_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
+			if (pte_swp_uffd_wp(pteval))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
 			/*