Commit 577c2b35 authored by Will Deacon

arm64: memory: Ensure address tag is masked in conversion macros

When converting a linear virtual address to a physical address, pfn, or
struct page *, we must make sure that the tag bits are masked before the
calculation; otherwise we end up with corrupt pointers when running with
CONFIG_KASAN_SW_TAGS=y:

  | Unable to handle kernel paging request at virtual address 0037fe0007580d08
  | [0037fe0007580d08] address between user and kernel address ranges

Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 9cb1c5dd ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
Signed-off-by: Will Deacon <will@kernel.org>
parent 68dd8ef3
arch/arm64/include/asm/memory.h
@@ -252,7 +252,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
-	phys_addr_t __x = (phys_addr_t)(x);				\
+	phys_addr_t __x = (phys_addr_t)(__tag_reset(x));		\
 	__is_lm_address(__x) ? __lm_to_phys(__x) :			\
 			       __kimg_to_phys(__x);			\
 })
@@ -324,7 +324,8 @@ static inline void *phys_to_virt(phys_addr_t x)
 	((void *)__addr_tag);						\
 })
 
-#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
+#define virt_to_page(vaddr)						\
+	((struct page *)((__virt_to_pgoff(__tag_reset(vaddr))) + VMEMMAP_START))
 #endif
 
 #define virt_addr_valid(addr)	({					\
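For readers unfamiliar with the tag layout, the following is a minimal, self-contained userspace sketch of the failure shown in the log above. It assumes only the arm64 convention that the KASAN software tag occupies the pointer's top byte (bits 63:56). Every name and constant in it (DEMO_PAGE_OFFSET, DEMO_PHYS_OFFSET, tag_set_demo, tag_reset_demo, lm_to_phys_demo) is an illustrative stand-in rather than the kernel's own identifier, and the masking helper only mimics the effect __tag_reset() has on a tagged kernel pointer.

/*
 * Sketch only: demonstrates how an unmasked top-byte tag corrupts a
 * linear-map VA -> PA conversion. Constants and helper names are made
 * up for the example; they are not the kernel's real layout or API.
 */
#include <stdint.h>
#include <stdio.h>

#define DEMO_PAGE_OFFSET 0xffff000000000000ULL	/* example linear-map base VA */
#define DEMO_PHYS_OFFSET 0x0000000040000000ULL	/* example physical RAM base  */

/* Stamp a KASAN-style tag into the top byte of a virtual address. */
static inline uint64_t tag_set_demo(uint64_t addr, uint8_t tag)
{
	return (addr & ~(0xffULL << 56)) | ((uint64_t)tag << 56);
}

/*
 * Drop the tag by sign-extending from bit 55, which restores the 0xff
 * top byte of a kernel VA (mimicking what __tag_reset() does to a
 * tagged kernel pointer).
 */
static inline uint64_t tag_reset_demo(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 8) >> 8);
}

/* Linear-map VA -> PA, mirroring the shape of __lm_to_phys(). */
static inline uint64_t lm_to_phys_demo(uint64_t va)
{
	return (va - DEMO_PAGE_OFFSET) + DEMO_PHYS_OFFSET;
}

int main(void)
{
	uint64_t va     = DEMO_PAGE_OFFSET + 0x7580d08;	/* untagged linear VA    */
	uint64_t tagged = tag_set_demo(va, 0x5a);	/* arbitrary example tag */

	/* Without masking, the tag bits leak into the "physical" address. */
	printf("unmasked: %#llx\n",
	       (unsigned long long)lm_to_phys_demo(tagged));

	/* Masking first, as the patch does via __tag_reset(), gives the real PA. */
	printf("masked:   %#llx\n",
	       (unsigned long long)lm_to_phys_demo(tag_reset_demo(tagged)));

	return 0;
}

Running it prints a wildly out-of-range "physical" address for the unmasked conversion and the expected offset for the masked one, which is the same class of corrupt pointer reported in the paging-request splat above.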