    arm64: mm: use correct mapping granularity under DEBUG_RODATA · 4fee9f36
    Ard Biesheuvel authored
    When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
    and resides at an offset that is not a multiple of 512 MB, the rounding
    that occurs in __map_memblock() and fixup_executable() results in
    incorrect regions being mapped.
    
    The following snippet from /sys/kernel/debug/kernel_page_tables shows
    how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000,
    the first 2 MB of memory (which may be inaccessible from non-secure EL1
    or just reserved by the firmware) is inadvertently mapped into the end of
    the module region.
    
      ---[ Modules start ]---
      0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
      ---[ Modules end ]---
      ---[ Kernel Mapping ]---
      0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
      0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
      0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
      0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
      0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
      0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL
    
    The same issue is likely to occur on 16k pages kernels whose load
    address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
    SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.
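
    The effect of the two rounding granularities can be sketched with
    simplified stand-ins for the kernel's helpers (the sizes and the
    round_down macro below are illustrative, not the kernel's actual
    definitions; 512 MB is SECTION_SIZE with a 64k granule, and 64 KB
    stands in for SWAPPER_BLOCK_SIZE on such a kernel):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for the kernel's round_down() helper
     * (y must be a power of two). */
    #define round_down(x, y) ((x) & ~((uint64_t)(y) - 1))

    int main(void)
    {
        uint64_t section_size  = 512ULL << 20; /* 512 MB: SECTION_SIZE, 64k granule */
        uint64_t swapper_block = 64ULL << 10;  /* 64 KB: stand-in for SWAPPER_BLOCK_SIZE */

        /* Kernel loaded 2 MB above the base of DRAM at 0x40000000,
         * as in the page-table dump above. */
        uint64_t kernel_start = 0x40000000ULL + (2ULL << 20);

        /* Rounding down to SECTION_SIZE lands 2 MB below the kernel,
         * pulling the possibly-reserved first 2 MB into the mapping. */
        printf("SECTION_SIZE round_down:       0x%llx\n",
               (unsigned long long)round_down(kernel_start, section_size));

        /* Rounding down to the swapper block size stays at the kernel base. */
        printf("SWAPPER_BLOCK_SIZE round_down: 0x%llx\n",
               (unsigned long long)round_down(kernel_start, swapper_block));
        return 0;
    }
    ```

    With these inputs the coarse rounding yields 0x40000000 (the DRAM
    base, 2 MB below the kernel) while the finer rounding yields
    0x40200000 (the kernel's actual start), which is exactly the
    mismatch visible in the dump above.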
    
    Fixes: da141706 ("arm64: add better page protections to arm64")
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Laura Abbott <labbott@redhat.com>
    Cc: <stable@vger.kernel.org> # 4.0+
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>