Commit 0a02756d authored by Linus Torvalds

Merge tag 'riscv-for-linus-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V fixes from Palmer Dabbelt:

 - Another fix to avoid allocating pages that overlap with ERR_PTR,
   which manifests on rv32

 - A revert for the badaccess patch I incorrectly picked up an early
   version of

* tag 'riscv-for-linus-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  Revert "riscv: mm: accelerate pagefault when badaccess"
  riscv: fix overlap of allocated page and PTR_ERR
parents 8d6b029e e2c79b4c
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -293,8 +293,8 @@ void handle_page_fault(struct pt_regs *regs)
 	if (unlikely(access_error(cause, vma))) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
-		tsk->thread.bad_cause = SEGV_ACCERR;
-		bad_area_nosemaphore(regs, code, addr);
+		tsk->thread.bad_cause = cause;
+		bad_area_nosemaphore(regs, SEGV_ACCERR, addr);
 		return;
 	}
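
In the early version, thread.bad_cause ended up holding the siginfo constant SEGV_ACCERR while the function's generic "code" variable was handed to bad_area_nosemaphore(); the revert restores the intended split, where bad_cause records the hardware trap cause and SEGV_ACCERR is the si_code delivered for the access error. Below is a minimal userspace model of that corrected ordering; the struct, helper function and example cause value are illustrative assumptions, not kernel code.

#include <signal.h>	/* SEGV_ACCERR */
#include <stdio.h>

/* Illustrative stand-in for tsk->thread.bad_cause. */
struct fake_thread { unsigned long bad_cause; };

/* Models bad_area_nosemaphore(regs, code, addr): "code" is a siginfo si_code. */
static void report_bad_area(int code, unsigned long addr)
{
	printf("deliver SIGSEGV, si_code=%d, addr=0x%lx\n", code, addr);
}

int main(void)
{
	struct fake_thread thread = { 0 };
	unsigned long cause = 15;	/* e.g. RISC-V store/AMO page fault */
	unsigned long addr = 0x12345000;

	/* The ordering restored by the revert: */
	thread.bad_cause = cause;		/* keep the hardware trap cause for debug reporting */
	report_bad_area(SEGV_ACCERR, addr);	/* the access-permission si_code goes to the signal */

	printf("bad_cause=%lu\n", thread.bad_cause);
	return 0;
}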
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -250,18 +250,19 @@ static void __init setup_bootmem(void)
 	kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base;
 
 	/*
-	 * memblock allocator is not aware of the fact that last 4K bytes of
-	 * the addressable memory can not be mapped because of IS_ERR_VALUE
-	 * macro. Make sure that last 4k bytes are not usable by memblock
-	 * if end of dram is equal to maximum addressable memory. For 64-bit
-	 * kernel, this problem can't happen here as the end of the virtual
-	 * address space is occupied by the kernel mapping then this check must
-	 * be done as soon as the kernel mapping base address is determined.
+	 * Reserve physical address space that would be mapped to virtual
+	 * addresses greater than (void *)(-PAGE_SIZE) because:
+	 *  - This memory would overlap with ERR_PTR
+	 *  - This memory belongs to high memory, which is not supported
+	 *
+	 * This is not applicable to 64-bit kernel, because virtual addresses
+	 * after (void *)(-PAGE_SIZE) are not linearly mapped: they are
+	 * occupied by kernel mapping. Also it is unrealistic for high memory
+	 * to exist on 64-bit platforms.
 	 */
 	if (!IS_ENABLED(CONFIG_64BIT)) {
-		max_mapped_addr = __pa(~(ulong)0);
-		if (max_mapped_addr == (phys_ram_end - 1))
-			memblock_set_current_limit(max_mapped_addr - 4096);
+		max_mapped_addr = __va_to_pa_nodebug(-PAGE_SIZE);
+		memblock_reserve(max_mapped_addr, (phys_addr_t)-max_mapped_addr);
 	}
 
 	min_low_pfn = PFN_UP(phys_ram_base);
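
Why the last page matters: any virtual address in the final 4 KiB of the address space falls in the range that ERR_PTR()/IS_ERR() reserve for encoding errno values, so on rv32 a genuine allocation mapped there would be misread as an error pointer. The sketch below is a userspace model of that check and of the reservation size arithmetic; the MAX_ERRNO and PAGE_SIZE constants mirror include/linux/err.h and the kernel's 4 KiB pages, while the example max_mapped_addr value is an assumption, not taken from the patch.

#include <stdio.h>
#include <stdint.h>

#define MAX_ERRNO	4095u	/* same value as include/linux/err.h */
#define PAGE_SIZE	4096u

/* Mirrors the kernel's IS_ERR_VALUE() test, on a 32-bit address model. */
static int is_err_value32(uint32_t x)
{
	return x >= (uint32_t)-MAX_ERRNO;	/* i.e. x >= 0xfffff001 */
}

int main(void)
{
	/* A hypothetical object inside the top page of an rv32 linear map. */
	uint32_t top_page = (uint32_t)-PAGE_SIZE;	/* 0xfffff000 */
	uint32_t obj = top_page + 0x20;

	/* Prints 1: a valid pointer here is indistinguishable from ERR_PTR(). */
	printf("is_err_value(0x%x) = %d\n", (unsigned)obj, is_err_value32(obj));

	/*
	 * memblock_reserve(max_mapped_addr, (phys_addr_t)-max_mapped_addr):
	 * the two's complement of the start address is exactly the number of
	 * bytes from that address up to 2^32, so everything from there to the
	 * end of the 32-bit physical address space gets reserved. The start
	 * address below is an assumed example, not a value from the patch.
	 */
	uint32_t max_mapped_addr = 0xbffff000u;
	uint32_t reserve_size = (uint32_t)-max_mapped_addr;	/* 0x40001000 */
	printf("reserve 0x%x bytes starting at 0x%x\n",
	       (unsigned)reserve_size, (unsigned)max_mapped_addr);
	return 0;
}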