Commit 203b1152 authored by Amit Daniel Kachhap, committed by Will Deacon

arm64/crash_core: Export KERNELPACMASK in vmcoreinfo

The arm64 Linux kernel recently added support for the Armv8.3-A Pointer
Authentication feature. If this feature is enabled in the kernel and the
hardware supports address authentication, then return addresses are
signed and stored on the stack to prevent ROP-style attacks. The kdump
tool will now dump the kernel with signed LR values on the stack.

Any user analysis tool for this kernel dump may need the kernel PAC mask
information in vmcoreinfo to compute the correct return addresses for
stack traces as well as to resolve symbol names.
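
For illustration only, a dump-analysis tool could consume the exported
mask roughly as follows (a minimal sketch; read_vmcoreinfo_u64() is a
hypothetical helper that parses the NUMBER(KERNELPACMASK) line from the
dump's vmcoreinfo note):

    #include <stdint.h>

    /* Hypothetical helper: parse "NUMBER(KERNELPACMASK)=0x..." from the
     * dump's vmcoreinfo note. */
    uint64_t read_vmcoreinfo_u64(const char *name);

    /* Kernel pointers have bit 55 set, so the PAC bits are restored to
     * all-ones, mirroring the kernel-side ptrauth_clear_pac() logic. */
    static uint64_t strip_kernel_pac(uint64_t lr, uint64_t pac_mask)
    {
            return lr | pac_mask;   /* mask is 0 when ptrauth is absent */
    }

A stack walker would apply this to every saved LR before symbolizing it.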

This patch is similar to commit ec6e822d ("arm64: expose user PAC
bit positions via ptrace"), which exposes the PAC mask information via the
ptrace interface.
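
For comparison, the ptrace interface from that commit can be exercised
roughly like this (a sketch, error handling elided; the regset payload is
struct user_pac_mask):

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <linux/elf.h>      /* NT_ARM_PAC_MASK */
    #include <asm/ptrace.h>     /* struct user_pac_mask */

    /* Read the user PAC bit positions of a traced task. */
    static int read_user_pac_masks(pid_t pid)
    {
            struct user_pac_mask masks;
            struct iovec iov = {
                    .iov_base = &masks,
                    .iov_len = sizeof(masks),
            };

            if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_PAC_MASK, &iov) == -1)
                    return -1;

            printf("data mask: 0x%llx, insn mask: 0x%llx\n",
                   (unsigned long long)masks.data_mask,
                   (unsigned long long)masks.insn_mask);
            return 0;
    }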

The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so that
macros like ptrauth_kernel_pac_mask can be used unguarded. This config
protection is confusing, as the pointer authentication feature may be
missing at runtime even though the config option is present.
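
With the guard removed, the masks are always defined, so callers gate on
the runtime CPU feature instead; the pattern used by this patch (see the
crash_core.c hunk below) amounts to:

    /* Safe on all configs/CPUs: evaluates to 0 when address auth is absent. */
    u64 mask = system_supports_address_auth() ?
               ptrauth_kernel_pac_mask() : 0;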
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1589202116-18265-1-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
parent 62a679cb
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -2,8 +2,6 @@
 #ifndef __ASM_COMPILER_H
 #define __ASM_COMPILER_H
 
-#if defined(CONFIG_ARM64_PTR_AUTH)
-
 /*
  * The EL0/EL1 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
@@ -19,6 +17,4 @@
 #define __builtin_return_address(val)					\
 	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
 
-#endif /* CONFIG_ARM64_PTR_AUTH */
-
 #endif /* __ASM_COMPILER_H */
--- a/arch/arm64/kernel/crash_core.c
+++ b/arch/arm64/kernel/crash_core.c
@@ -5,6 +5,7 @@
  */
 
 #include <linux/crash_core.h>
+#include <asm/cpufeature.h>
 #include <asm/memory.h>
 
 void arch_crash_save_vmcoreinfo(void)
@@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
 	vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
 						PHYS_OFFSET);
 	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
+	vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
+						system_supports_address_auth() ?
+						ptrauth_kernel_pac_mask() : 0);
 }