Commit 6ea02ee4 authored by Kefeng Wang, committed by Andrew Morton

arm64: mm: cleanup __do_page_fault()

Patch series "arch/mm/fault: accelerate pagefault when badaccess", v2.

With VMA lock-based page fault handling enabled, a bad access that is
detected under the per-VMA lock still falls back to mmap_lock-based
handling, so it takes the mmap lock and looks the VMA up again for
nothing. An lmbench test shows a 34% improvement from these changes
on arm64:

  lat_sig -P 1 prot lat_sig 0.29194 -> 0.19198
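
For context, the per-VMA-lock fast path that the series optimizes has
roughly the following shape (a minimal sketch paraphrased from the
generic pattern used by arch fault handlers, not code from this patch;
lock_vma_under_rcu(), vma_end_read(), handle_mm_fault() and
FAULT_FLAG_VMA_LOCK are the existing mm helpers assumed here):

	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		if (!(vma->vm_flags & vm_flags)) {
			/* badaccess: drop the per-VMA lock ... */
			vma_end_read(vma);
			/*
			 * ... and fall back to the slow path, which takes
			 * mmap_lock and looks the VMA up a second time only
			 * to fail the same permission check again.
			 */
			goto lock_mmap;
		}
		fault = handle_mm_fault(vma, addr,
					mm_flags | FAULT_FLAG_VMA_LOCK, regs);
	}
lock_mmap:
	/* mmap_read_lock() + VMA lookup, then the legacy handling */

The series makes the bad access be reported directly from the per-VMA
lock path instead of retrying it under mmap_lock.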


This patch (of 7):

__do_page_fault() only checks vm_flags and then calls handle_mm_fault(),
and it is only called by do_page_fault(), so squash it into
do_page_fault() to clean up the code.
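
For reference, the check being inlined tests the permission bits that
the faulting access requires; a simplified sketch of how do_page_fault()
already derives vm_flags from the ESR before this point (paraphrased and
condensed from the existing prologue, not part of this diff):

	if (is_el0_instruction_abort(esr))
		vm_flags = VM_EXEC;	/* instruction fetch needs exec */
	else if (is_write_abort(esr))
		vm_flags = VM_WRITE;	/* data write needs write */
	else
		vm_flags = VM_READ;	/* otherwise a readable mapping */

so !(vma->vm_flags & vm_flags) means the VMA can never satisfy the
access and VM_FAULT_BADACCESS is the right result.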

Link: https://lkml.kernel.org/r/20240403083805.1818160-1-wangkefeng.wang@huawei.com
Link: https://lkml.kernel.org/r/20240403083805.1818160-2-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 3931b871
@@ -486,25 +486,6 @@ static void do_bad_area(unsigned long far, unsigned long esr,
 	}
 }
 
-#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
-
-static vm_fault_t __do_page_fault(struct mm_struct *mm,
-				  struct vm_area_struct *vma, unsigned long addr,
-				  unsigned int mm_flags, unsigned long vm_flags,
-				  struct pt_regs *regs)
-{
-	/*
-	 * Ok, we have a good vm_area for this memory access, so we can handle
-	 * it.
-	 * Check that the permissions on the VMA allow for the fault which
-	 * occurred.
-	 */
-	if (!(vma->vm_flags & vm_flags))
-		return VM_FAULT_BADACCESS;
-	return handle_mm_fault(vma, addr, mm_flags, regs);
-}
-
 static bool is_el0_instruction_abort(unsigned long esr)
 {
 	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
@@ -519,6 +500,9 @@ static bool is_write_abort(unsigned long esr)
 	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
 }
 
+#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
+#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
+
 static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 				   struct pt_regs *regs)
 {
@@ -617,7 +601,10 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto done;
 	}
 
-	fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
+	if (!(vma->vm_flags & vm_flags))
+		fault = VM_FAULT_BADACCESS;
+	else
+		fault = handle_mm_fault(vma, addr, mm_flags, regs);
 
 	/* Quick path to respond to signals */
 	if (fault_signal_pending(fault, regs)) {
...