Commit 658be465 authored by Kefeng Wang, committed by Andrew Morton

mm: support poison recovery from copy_present_page()

Similar to other poison recovery paths, use copy_mc_user_highpage() to
avoid a potential kernel panic while copying a page in
copy_present_page() during fork.  Once the copy fails due to hwpoison in
the source page, we need to break out of the copy loop in
copy_pte_range() and release the prealloc folio, so
copy_mc_user_highpage() is moved ahead of setting *prealloc to NULL.

Link: https://lkml.kernel.org/r/20240906024201.1214712-3-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jiaqi Yan <jiaqiyan@google.com>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent aa549f92
@@ -926,8 +926,11 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		 * We have a prealloc page, all good! Take it
 		 * over and copy the page & arm it.
 		 */
+
+		if (copy_mc_user_highpage(&new_folio->page, page, addr, src_vma))
+			return -EHWPOISON;
+
 		*prealloc = NULL;
-		copy_user_highpage(&new_folio->page, page, addr, src_vma);
 		__folio_mark_uptodate(new_folio);
 		folio_add_new_anon_rmap(new_folio, dst_vma, addr, RMAP_EXCLUSIVE);
 		folio_add_lru_vma(new_folio, dst_vma);
@@ -1166,8 +1169,9 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		/*
 		 * If we need a pre-allocated page for this pte, drop the
 		 * locks, allocate, and try again.
+		 * If copy failed due to hwpoison in source page, break out.
 		 */
-		if (unlikely(ret == -EAGAIN))
+		if (unlikely(ret == -EAGAIN || ret == -EHWPOISON))
 			break;
 		if (unlikely(prealloc)) {
 			/*
@@ -1197,7 +1201,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			goto out;
 		}
 		entry.val = 0;
-	} else if (ret == -EBUSY) {
+	} else if (ret == -EBUSY || unlikely(ret == -EHWPOISON)) {
 		goto out;
 	} else if (ret == -EAGAIN) {
 		prealloc = folio_prealloc(src_mm, src_vma, addr, false);