Commit 58f595c6 authored by David Hildenbrand, committed by Andrew Morton

mm/ksm: simplify break_ksm() to not rely on VM_FAULT_WRITE

Now that GUP no longer requires VM_FAULT_WRITE, break_ksm() is the sole
remaining user of VM_FAULT_WRITE.  As we also want to stop triggering a
fake write fault and instead use FAULT_FLAG_UNSHARE -- similar to
GUP-triggered unsharing when taking a R/O pin on a shared anonymous page
(including KSM pages) -- let's stop relying on VM_FAULT_WRITE.

Let's rework break_ksm() to not rely on the return value of
handle_mm_fault() anymore to figure out whether COW-breaking was
successful.  Simply perform another follow_page() lookup to verify the
result.
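
In essence, the success check becomes a plain lookup: if follow_page() no
longer finds a KSM page mapped at the address, COW breaking is done.  A
simplified sketch of that check (the actual diff is below; ksm_page is the
local flag introduced by this patch):

		/* Re-check what is mapped instead of inspecting the fault result. */
		page = follow_page(vma, addr,
				   FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
		if (IS_ERR_OR_NULL(page))
			break;
		if (PageKsm(page))
			ksm_page = true;
		put_page(page);

		if (!ksm_page)
			return 0;	/* No KSM page mapped anymore: unmerge succeeded. */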

While this makes break_ksm() slightly less efficient, we can simplify
handle_mm_fault() a little and easily switch to FAULT_FLAG_UNSHARE without
introducing similar KSM-specific behavior for FAULT_FLAG_UNSHARE.
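
For illustration only (not part of this patch): with the return value no
longer inspected for VM_FAULT_WRITE, the later conversion could be as small
as swapping the fault flags in the new handle_mm_fault() call:

		/* Sketch of the possible follow-up, not part of this patch. */
		ret = handle_mm_fault(vma, addr,
				      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
				      NULL);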

In my setup (AMD Ryzen 9 3900X), running the KSM selftest to test unmerge
performance on 2 GiB (taskset 0x8 ./ksm_tests -D -s 2048), this results in
a performance degradation of ~4% -- 5% (old: ~5250 MiB/s, new: ~5010
MiB/s).

I don't think that we particularly care about that performance drop when
unmerging.  If it ever turns out to be an actual performance issue, we can
think about a better alternative for FAULT_FLAG_UNSHARE -- let's just keep
it simple for now.

Link: https://lkml.kernel.org/r/20221021101141.84170-3-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 5036880e
@@ -440,26 +440,27 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 	vm_fault_t ret = 0;
 
 	do {
+		bool ksm_page = false;
+
 		cond_resched();
 		page = follow_page(vma, addr,
 				FOLL_GET | FOLL_MIGRATION | FOLL_REMOTE);
 		if (IS_ERR_OR_NULL(page))
 			break;
 		if (PageKsm(page))
-			ret = handle_mm_fault(vma, addr,
-					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
-					      NULL);
-		else
-			ret = VM_FAULT_WRITE;
+			ksm_page = true;
 		put_page(page);
-	} while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
+
+		if (!ksm_page)
+			return 0;
+		ret = handle_mm_fault(vma, addr,
+				      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+				      NULL);
+	} while (!(ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
 	/*
-	 * We must loop because handle_mm_fault() may back out if there's
-	 * any difficulty e.g. if pte accessed bit gets updated concurrently.
-	 *
-	 * VM_FAULT_WRITE is what we have been hoping for: it indicates that
-	 * COW has been broken, even if the vma does not permit VM_WRITE;
-	 * but note that a concurrent fault might break PageKsm for us.
+	 * We must loop until we no longer find a KSM page because
+	 * handle_mm_fault() may back out if there's any difficulty e.g. if
+	 * pte accessed bit gets updated concurrently.
 	 *
 	 * VM_FAULT_SIGBUS could occur if we race with truncation of the
 	 * backing file, which also invalidates anonymous pages: that's
...