Commit 4a18419f authored by Nadav Amit, committed by Andrew Morton

mm/mprotect: use mmu_gather

Patch series "mm/mprotect: avoid unnecessary TLB flushes", v6.

This patchset is intended to remove unnecessary TLB flushes during
mprotect() syscalls.  Once this patchset makes it through, similar and
further optimizations for MADV_COLD and userfaultfd would be possible.

Basically, there are 3 optimizations in this patch-set:

1. Use TLB batching infrastructure to batch flushes across VMAs and do
   better/fewer flushes.  This would also be handy for later userfaultfd
   enhancements.

2. Avoid unnecessary TLB flushes.  This optimization is the one that
   provides most of the performance benefits.  Unlike previous versions,
   we now only avoid flushes that would not result in spurious
   page-faults.

3. Avoid TLB flushes on change_huge_pmd() that are only needed to
   prevent the A/D bits from changing.

Andrew asked for some benchmark numbers.  I do not have a deterministic
macrobenchmark in which the benefit is easy to show.  I therefore ran a
microbenchmark: a loop that does the following on anonymous memory, just
as a sanity check that time is saved by avoiding TLB flushes.  The loop
goes:

	mprotect(p, PAGE_SIZE, PROT_READ)
	mprotect(p, PAGE_SIZE, PROT_READ|PROT_WRITE)
	*p = 0; // make the page writable

The test was run in a KVM guest with 1 or 2 threads (the second thread was
busy-looping).  I measured the time (in cycles) of each operation:

		1 thread		2 threads
		mmots	+patch		mmots	+patch
PROT_READ	3494	2725 (-22%)	8630	7788 (-10%)
PROT_READ|WRITE	3952	2724 (-31%)	9075	2865 (-68%)

[ mmots = v5.17-rc6-mmots-2022-03-06-20-38 ]

The exact numbers are really meaningless, but the benefit is clear.  There
are two interesting results though:

(1) PROT_READ becomes cheaper, although one might expect it not to be
affected.  This is presumably due to the TLB miss that is saved.

(2) Without the memory access (*p = 0), the speedup of the patch is even
greater.  In that scenario mprotect(PROT_READ) also avoids the TLB flush.
As a result, both operations on the patched kernel take roughly 1500
cycles (with either 1 or 2 threads), whereas on mmotm their cost is as
high as presented in the table.


This patch (of 3):

change_pXX_range() currently does not use mmu_gather, but instead
implements its own deferred TLB flush scheme.  This both complicates the
code, as developers need to be aware of different invalidation schemes,
and prevents opportunities to avoid TLB flushes or perform them at finer
granularity.

The use of mmu_gather for modified PTEs has benefits in various scenarios
even if pages are not released.  For instance, if only a single page needs
to be flushed out of a range of many pages, only that page would be
flushed.  If a THP page is flushed, on x86 a single TLB invlpg instruction
can be used instead of 512 instructions (or a full TLB flush, which Linux
would actually use by default).  mprotect() over multiple VMAs requires
only a single flush.

Use mmu_gather in change_pXX_range().  As the pages are not released, only
record the flushed range using tlb_flush_pXX_range().

Handle THP similarly and get rid of flush_cache_range(), which becomes
redundant since tlb_start_vma() calls it when needed.
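
Callers now follow a simple pattern, introduced by the hunks below in
do_mprotect_pkey(), mwriteprotect_range() and change_prot_numa():
allocate an mmu_gather on the stack, let change_protection() /
mprotect_fixup() record the modified ranges into it, and issue one
batched flush at the end.  A condensed sketch of that pattern follows;
the wrapper function name is made up for illustration.

	static int example_change_prot(struct mm_struct *mm,
				       struct vm_area_struct *vma,
				       struct vm_area_struct **pprev,
				       unsigned long start, unsigned long end,
				       unsigned long newflags)
	{
		struct mmu_gather tlb;
		int error;

		tlb_gather_mmu(&tlb, mm);	/* start batching; no flush yet */
		error = mprotect_fixup(&tlb, vma, pprev, start, end, newflags);
		tlb_finish_mmu(&tlb);		/* one flush for all recorded ranges */

		return error;
	}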

Link: https://lkml.kernel.org/r/20220401180821.1986781-1-namit@vmware.com
Link: https://lkml.kernel.org/r/20220401180821.1986781-2-namit@vmware.com
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Nick Piggin <npiggin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent a2ad63da
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -758,6 +758,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	unsigned long stack_size;
 	unsigned long stack_expand;
 	unsigned long rlim_stack;
+	struct mmu_gather tlb;
 
 #ifdef CONFIG_STACK_GROWSUP
 	/* Limit stack size */
@@ -812,8 +813,11 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	vm_flags |= mm->def_flags;
 	vm_flags |= VM_STACK_INCOMPLETE_SETUP;
 
-	ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
+	tlb_gather_mmu(&tlb, mm);
+	ret = mprotect_fixup(&tlb, vma, &prev, vma->vm_start, vma->vm_end,
 			vm_flags);
+	tlb_finish_mmu(&tlb);
 	if (ret)
 		goto out_unlock;
 	BUG_ON(prev != vma);
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -36,8 +36,9 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pud,
 		 unsigned long addr);
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		   unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
-int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
-		    pgprot_t newprot, unsigned long cp_flags);
+int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags);
 vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
 				   pgprot_t pgprot, bool write);
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1967,10 +1967,11 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 #define  MM_CP_UFFD_WP_ALL		(MM_CP_UFFD_WP | \
 					 MM_CP_UFFD_WP_RESOLVE)
 
-extern unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+extern unsigned long change_protection(struct mmu_gather *tlb,
+			      struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
 			      unsigned long cp_flags);
-extern int mprotect_fixup(struct vm_area_struct *vma,
+extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			  struct vm_area_struct **pprev, unsigned long start,
 			  unsigned long end, unsigned long newflags);
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1709,8 +1709,9 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
  *      or if prot_numa but THP migration is not supported
  *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
  */
-int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, pgprot_t newprot, unsigned long cp_flags)
+int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *ptl;
@@ -1721,6 +1722,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
+
 	if (prot_numa && !thp_migration_supported())
 		return 1;
@@ -1819,6 +1822,9 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 	ret = HPAGE_PMD_NR;
 	set_pmd_at(mm, addr, pmd, entry);
+
+	tlb_flush_pmd_range(tlb, addr, HPAGE_PMD_SIZE);
+
 	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(entry));
 unlock:
 	spin_unlock(ptl);
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -104,6 +104,7 @@
 #include <linux/swapops.h>
 
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 #include <linux/uaccess.h>
 
 #include "internal.h"
@@ -630,12 +631,18 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long addr, unsigned long end)
 {
+	struct mmu_gather tlb;
 	int nr_updated;
 
-	nr_updated = change_protection(vma, addr, end, PAGE_NONE, MM_CP_PROT_NUMA);
+	tlb_gather_mmu(&tlb, vma->vm_mm);
+
+	nr_updated = change_protection(&tlb, vma, addr, end, PAGE_NONE,
+				       MM_CP_PROT_NUMA);
 	if (nr_updated)
 		count_vm_numa_events(NUMA_PTE_UPDATES, nr_updated);
 
+	tlb_finish_mmu(&tlb);
+
 	return nr_updated;
 }
 
 #else
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -33,12 +33,13 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 
 #include "internal.h"
 
-static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_pte_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
@@ -49,6 +50,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 
+	tlb_change_page_size(tlb, PAGE_SIZE);
+
 	/*
 	 * Can be called with only the mmap_lock for reading by
 	 * prot_numa so we must check the pmd isn't constantly
@@ -149,6 +152,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
@@ -234,9 +238,9 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
-static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
-		pud_t *pud, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -276,8 +280,12 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			if (next - addr != HPAGE_PMD_SIZE) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			} else {
-				int nr_ptes = change_huge_pmd(vma, pmd, addr,
-							      newprot, cp_flags);
+				/*
+				 * change_huge_pmd() does not defer TLB flushes,
+				 * so no need to propagate the tlb argument.
+				 */
+				int nr_ptes = change_huge_pmd(tlb, vma, pmd,
+						addr, newprot, cp_flags);
 
 				if (nr_ptes) {
 					if (nr_ptes == HPAGE_PMD_NR) {
@@ -291,8 +299,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 				}
 				/* fall through, the trans huge pmd just split */
 			}
-		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
-					      cp_flags);
+		this_pages = change_pte_range(tlb, vma, pmd, addr, next,
+					      newprot, cp_flags);
 		pages += this_pages;
 next:
 		cond_resched();
@@ -306,9 +314,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct vm_area_struct *vma,
-		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pud_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -319,16 +327,16 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		pages += change_pmd_range(vma, pud, addr, next, newprot,
+		pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
 					  cp_flags);
 	} while (pud++, addr = next, addr != end);
 
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
-		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -339,44 +347,40 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		next = p4d_addr_end(addr, end);
 		if (p4d_none_or_clear_bad(p4d))
 			continue;
-		pages += change_pud_range(vma, p4d, addr, next, newprot,
+		pages += change_pud_range(tlb, vma, p4d, addr, next, newprot,
 					  cp_flags);
 	} while (p4d++, addr = next, addr != end);
 
 	return pages;
 }
 
-static unsigned long change_protection_range(struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_protection_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	unsigned long next;
-	unsigned long start = addr;
 	unsigned long pages = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
-	flush_cache_range(vma, addr, end);
-	inc_tlb_flush_pending(mm);
+	tlb_start_vma(tlb, vma);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		pages += change_p4d_range(vma, pgd, addr, next, newprot,
+		pages += change_p4d_range(tlb, vma, pgd, addr, next, newprot,
 					  cp_flags);
 	} while (pgd++, addr = next, addr != end);
 
-	/* Only flush the TLB if we actually modified any entries: */
-	if (pages)
-		flush_tlb_range(vma, start, end);
-	dec_tlb_flush_pending(mm);
+	tlb_end_vma(tlb, vma);
 
 	return pages;
 }
 
-unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+unsigned long change_protection(struct mmu_gather *tlb,
+			      struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
 			      unsigned long cp_flags)
 {
@@ -387,7 +391,7 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(vma, start, end, newprot);
 	else
-		pages = change_protection_range(vma, start, end, newprot,
+		pages = change_protection_range(tlb, vma, start, end, newprot,
 						cp_flags);
 
 	return pages;
@@ -421,8 +425,9 @@ static const struct mm_walk_ops prot_none_walk_ops = {
 };
 
 int
-mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
-	unsigned long start, unsigned long end, unsigned long newflags)
+mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
+	       struct vm_area_struct **pprev, unsigned long start,
+	       unsigned long end, unsigned long newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long oldflags = vma->vm_flags;
@@ -509,7 +514,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
 	vma_set_page_prot(vma);
 
-	change_protection(vma, start, end, vma->vm_page_prot,
+	change_protection(tlb, vma, start, end, vma->vm_page_prot,
 			  dirty_accountable ? MM_CP_DIRTY_ACCT : 0);
 
 	/*
@@ -543,6 +548,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
 	const bool rier = (current->personality & READ_IMPLIES_EXEC) &&
 				(prot & PROT_READ);
+	struct mmu_gather tlb;
 
 	start = untagged_addr(start);
@@ -602,6 +608,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	else
 		prev = vma->vm_prev;
 
+	tlb_gather_mmu(&tlb, current->mm);
 	for (nstart = start ; ; ) {
 		unsigned long mask_off_old_flags;
 		unsigned long newflags;
@@ -628,18 +635,18 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
 			error = -EACCES;
-			goto out;
+			break;
 		}
 
 		/* Allow architectures to sanity-check the new flags */
 		if (!arch_validate_flags(newflags)) {
 			error = -EINVAL;
-			goto out;
+			break;
 		}
 
 		error = security_file_mprotect(vma, reqprot, prot);
 		if (error)
-			goto out;
+			break;
 
 		tmp = vma->vm_end;
 		if (tmp > end)
@@ -648,27 +655,28 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		if (vma->vm_ops && vma->vm_ops->mprotect) {
 			error = vma->vm_ops->mprotect(vma, nstart, tmp, newflags);
 			if (error)
-				goto out;
+				break;
 		}
 
-		error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
+		error = mprotect_fixup(&tlb, vma, &prev, nstart, tmp, newflags);
 		if (error)
-			goto out;
+			break;
 
 		nstart = tmp;
 
 		if (nstart < prev->vm_end)
 			nstart = prev->vm_end;
 		if (nstart >= end)
-			goto out;
+			break;
 
 		vma = prev->vm_next;
 		if (!vma || vma->vm_start != nstart) {
 			error = -ENOMEM;
-			goto out;
+			break;
 		}
 		prot = reqprot;
 	}
+
+	tlb_finish_mmu(&tlb);
+
 out:
 	mmap_write_unlock(current->mm);
 	return error;
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -16,6 +16,7 @@
 #include <linux/hugetlb.h>
 #include <linux/shmem_fs.h>
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 #include "internal.h"
 
 static __always_inline
@@ -687,6 +688,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 			atomic_t *mmap_changing)
 {
 	struct vm_area_struct *dst_vma;
+	struct mmu_gather tlb;
 	pgprot_t newprot;
 	int err;
@@ -728,8 +730,10 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	else
 		newprot = vm_get_page_prot(dst_vma->vm_flags);
 
-	change_protection(dst_vma, start, start + len, newprot,
+	tlb_gather_mmu(&tlb, dst_mm);
+	change_protection(&tlb, dst_vma, start, start + len, newprot,
 			  enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
+	tlb_finish_mmu(&tlb);
 
 	err = 0;
 out_unlock: