Commit e3fe8e55 authored by Philip Yang, committed by Jason Gunthorpe

mm/hmm: fix hmm_range_fault()'s handling of swapped out pages

hmm_range_fault() may return NULL pages because some of the pfns are equal
to HMM_PFN_NONE. This happens randomly under memory pressure. The reason
is that in the swapped-out page pte path, hmm_vma_handle_pte() doesn't
update the fault variable from cpu_flags, so it fails to call
hmm_vma_do_fault() to swap the page in.
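
As an illustration of how the problem shows up on the driver side, here is a
minimal, hypothetical check (not part of this patch) over the results of
hmm_range_fault(), assuming the hmm_range layout of this kernel generation,
where range->pfns[] holds the encoded results and range->values[HMM_PFN_NONE]
marks "no page". Before this fix, entries for swapped-out pages could still
carry that marker after hmm_range_fault() returned, so translating them yields
NULL pages:

#include <linux/mm.h>
#include <linux/hmm.h>

/*
 * Hypothetical helper, for illustration only: scan the result array after a
 * successful hmm_range_fault() and report whether any entry is still the
 * "none" marker, i.e. no page was faulted in for that address.
 */
static bool range_has_unfaulted_pages(const struct hmm_range *range)
{
	unsigned long i, npages = (range->end - range->start) >> PAGE_SHIFT;

	for (i = 0; i < npages; i++) {
		if (range->pfns[i] == range->values[HMM_PFN_NONE])
			return true;	/* this address has no page behind it */
	}
	return false;
}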

The fix is to call hmm_pte_need_fault() to update the fault variable.

Fixes: 74eee180 ("mm/hmm/mirror: device page fault handler")
Link: https://lore.kernel.org/r/20190815205227.7949-1-Philip.Yang@amd.com
Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: "Jérôme Glisse" <jglisse@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
parent c9624514
@@ -469,6 +469,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		if (!non_swap_entry(entry)) {
+			cpu_flags = pte_to_hmm_pfn_flags(range, pte);
+			hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
+					   &fault, &write_fault);
 			if (fault || write_fault)
 				goto fault;
 			return 0;
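
For readability, here is the same branch after the fix, written out as plain
code with comments (a sketch reconstructed from the hunk above; the
surrounding declarations in hmm_vma_handle_pte() are elided, and it assumes
pte_to_hmm_pfn_flags() reports no permissions for a non-present pte):

	swp_entry_t entry = pte_to_swp_entry(pte);

	if (!non_swap_entry(entry)) {
		/*
		 * Ordinary swapped-out page: derive the flags the CPU pte
		 * currently provides (none, as the page is not present)...
		 */
		cpu_flags = pte_to_hmm_pfn_flags(range, pte);
		/*
		 * ...and compare them against the access the caller requested
		 * in orig_pfn. This is the step the old code skipped, which
		 * left fault/write_fault at their initial false value.
		 */
		hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
				   &fault, &write_fault);
		/*
		 * Requested but not present: take the fault path and swap the
		 * page back in instead of reporting HMM_PFN_NONE.
		 */
		if (fault || write_fault)
			goto fault;
		return 0;
	}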