- 20 May, 2022 1 commit
-
Borislav Petkov authored
Add an explicit dependency on the respective CPU vendor so that the microcode support for that vendor gets built only when its CPU support is enabled.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/8ead0da9-9545-b10d-e3db-7df1a1f219e4@infradead.org
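A minimal Kconfig sketch of the resulting dependencies; the exact option text in arch/x86/Kconfig may differ, so treat this as an approximation:

  config MICROCODE_INTEL
          bool "Intel microcode loading support"
          depends on CPU_SUP_INTEL && MICROCODE
          default MICROCODE

  config MICROCODE_AMD
          bool "AMD microcode loading support"
          depends on CPU_SUP_AMD && MICROCODE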
-
- 11 May, 2022 1 commit
-
Andy Lutomirski authored
CONFIG_LEGACY_VSYSCALL_EMULATE is, as far as I know, only needed for the combined use of exotic and outdated debugging mechanisms with outdated binaries. At this point, no one should be using it. Eventually, dynamic switching of vsyscalls will be implemented, but it is much more complicated to support in EMULATE mode than in XONLY mode. So let's force all the distros off of EMULATE mode. If anyone actually needs it, they can set vsyscall=emulate, and the kernel can then get away with refusing to support newer security models if that option is set.

[ bp: Remove "we"s. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Florian Weimer <fweimer@redhat.com>
Link: https://lore.kernel.org/r/898932fe61db6a9d61bc2458fa2f6049f1ca9f5c.1652290558.git.luto@kernel.org
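A hedged sketch of the vsyscall choice block after this change, assuming the existing LEGACY_VSYSCALL_* symbol names, with the EMULATE entry gone:

  choice
          prompt "vsyscall table for legacy applications"
          depends on X86_64
          default LEGACY_VSYSCALL_XONLY

  config LEGACY_VSYSCALL_XONLY
          bool "Emulate execution only"

  config LEGACY_VSYSCALL_NONE
          bool "None"

  endchoice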
-
- 22 Apr, 2022 4 commits
-
Josh Poimboeuf authored
Remove CONFIG_NOINSTR_VALIDATION's dependency on HAVE_OBJTOOL, since other arches might want to implement objtool without it.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/488e94f69db4df154499bc098573d90e5db1c826.1650300597.git.jpoimboe@redhat.com
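A sketch of what the option plausibly looks like after this change (hedged; the real entry lives in lib/Kconfig.debug and may differ in detail):

  config NOINSTR_VALIDATION
          bool
          depends on HAVE_NOINSTR_VALIDATION && DEBUG_ENTRY
          select OBJTOOL
          default y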
-
Josh Poimboeuf authored
Objtool has some hacks in place to work around toolchain limitations which otherwise would break no-instrumentation rules. Make the hacks explicit (and optional for other arches) by turning them into a cmdline option and a kernel config option.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/b326eeb9c33231b9dfbb925f194ed7ee40edcd7c.1650300597.git.jpoimboe@redhat.com
-
Josh Poimboeuf authored
Objtool secretly does a jump label hack to overcome the limitations of the toolchain. Make the hack explicit (and optional for other arches) by turning it into a cmdline option and a kernel config option.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/3bdcbfdd27ecb01ddec13c04bdf756a583b13d24.1650300597.git.jpoimboe@redhat.com
-
Josh Poimboeuf authored
Now that stack validation is an optional feature of objtool, add CONFIG_OBJTOOL and replace most usages of CONFIG_STACK_VALIDATION with it. CONFIG_STACK_VALIDATION can now be considered frame-pointer specific. CONFIG_UNWINDER_ORC is already inherently valid for live patching, so there is no need to "validate" it.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/939bf3d85604b2a126412bf11af6e3bd3b872bcb.1650300597.git.jpoimboe@redhat.com
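A hedged sketch of the split described above, approximating the resulting lib/Kconfig.debug entries:

  config OBJTOOL
          bool

  config STACK_VALIDATION
          bool "Compile-time stack metadata validation"
          depends on HAVE_STACK_VALIDATION && UNWINDER_FRAME_POINTER
          select OBJTOOL
          default y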
-
- 19 Apr, 2022 1 commit
-
Nur Hussein authored
There is only one m in becoming.

Signed-off-by: Nur Hussein <hussein@unixcat.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20220417192454.10247-1-hussein@unixcat.org
-
- 11 Apr, 2022 1 commit
-
Borislav Petkov authored
Commit eac61655 ("x86: Deprecate a.out support") deprecated a.out support with the promise to remove it a couple of releases later. That commit landed in v5.1. Now, more than a couple of releases later, no one has complained, so remove it.

Fold in a hunk removing the reference to arch/x86/ia32/ia32_aout.c in MAINTAINERS:

  https://lore.kernel.org/r/20220316050828.17255-1-lukas.bulwahn@gmail.com

Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220113160115.5375-1-bp@alien8.de
-
- 07 Apr, 2022 5 commits
-
Kirill A. Shutemov authored
Intel TDX doesn't allow the VMM to directly access guest private memory. Any memory that is required for communication with the VMM must be shared explicitly. The same rule applies for any DMA to and from the TDX guest: all DMA pages have to be marked as shared pages. A generic way to achieve this without any changes to device drivers is to use the SWIOTLB framework.

The previous patch ("Add support for TDX shared memory") gave TDX guests the _ability_ to make some pages shared, but did not make any pages shared. This actually marks SWIOTLB buffers *as* shared.

Start returning true for cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) in TDX guests. This has several implications:

 - Allows the existing mem_encrypt_init() to be used for TDX, which sets SWIOTLB buffers shared (aka. "decrypted").
 - Ensures that all DMA is routed via the SWIOTLB mechanism (see pci_swiotlb_detect()).

Stop selecting DYNAMIC_PHYSICAL_MASK directly. It will get set indirectly by selecting X86_MEM_ENCRYPT. mem_encrypt_init() is currently under an AMD-specific #ifdef. Move it to a generic area of the header.

Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20220405232939.73860-28-kirill.shutemov@linux.intel.com
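Sketched in Kconfig terms, the select change described above might look like this (hedged; symbol names as used in arch/x86/Kconfig, details approximated):

  config INTEL_TDX_GUEST
          bool "Intel TDX (Trust Domain Extensions) - Guest Support"
          depends on X86_64 && CPU_SUP_INTEL
          select ARCH_HAS_CC_PLATFORM
          select X86_MEM_ENCRYPT   # pulls in DYNAMIC_PHYSICAL_MASK indirectly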
-
Sean Christopherson authored
There are a few MSRs and control register bits that the kernel normally needs to modify during boot. But, TDX disallows modification of these registers to help provide consistent security guarantees. Fortunately, TDX ensures that these are all in the correct state before the kernel loads, which means the kernel does not need to modify them.

The conditions to avoid are:

 * Any writes to the EFER MSR
 * Clearing CR4.MCE

This theoretically makes the guest boot more fragile. If, for instance, EFER was set up incorrectly and a WRMSR was performed, it will trigger an early exception panic, or a triple fault if it happens before early exceptions are set up. However, this is likely to trip up the guest BIOS long before control reaches the kernel. In any case, these kinds of problems are unlikely to occur in production environments, and developers have good debug tools to fix them quickly.

Change the common boot code to work on TDX and non-TDX systems. This should have no functional effect on non-TDX systems.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20220405232939.73860-24-kirill.shutemov@linux.intel.com
-
Kirill A. Shutemov authored
In TDX guests, memory is protected from host access by default. If a guest needs to communicate with the VMM (like the I/O use case), it uses a single bit in the physical address to communicate the protected/shared attribute of the given page.

In the x86 arch code, the __PHYSICAL_MASK macro represents the width of the physical address in the given architecture. It is used in creating the physical PAGE_MASK for address bits in the kernel. Since in a TDX guest a single bit is used as metadata, it needs to be excluded from the valid physical address bits to avoid using incorrect address bits in the kernel.

Enable DYNAMIC_PHYSICAL_MASK to support updating the __PHYSICAL_MASK.

Co-developed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20220405232939.73860-6-kirill.shutemov@linux.intel.com
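A short sketch of what enabling this means, with the mechanism paraphrased in comments (hedged; the helper config already exists for AMD SME):

  config DYNAMIC_PHYSICAL_MASK
          bool
          # When selected, __PHYSICAL_MASK reads a runtime variable
          # (physical_mask) that platform code can trim during boot,
          # e.g. to exclude the TDX shared bit from valid address bits.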
-
Kirill A. Shutemov authored
Confidential Computing (CC) features (like string I/O unroll support, memory encryption/decryption support, etc.) are conditionally enabled in the kernel using the cc_platform_has() API. Since TDX guests also need to use these CC features, extend the cc_platform_has() API and add TDX guest-specific CC attributes support.

The CC API also provides an interface to deal with the encryption mask. Extend it to cover TDX. Which bit in the page table entry is used to indicate shared/private state is determined by using the TDINFO TDCALL.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20220405232939.73860-5-kirill.shutemov@linux.intel.com
-
Kuppuswamy Sathyanarayanan authored
In preparation for extending the cc_platform_has() API to support TDX guests, use the CPUID instruction to detect support for TDX guests in the early boot code (via tdx_early_init()). Since copy_bootdata() is the first user of the cc_platform_has() API, detect the TDX guest status before it.

Define a synthetic feature flag (X86_FEATURE_TDX_GUEST) and set this bit on a valid TDX guest platform.

Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20220405232939.73860-2-kirill.shutemov@linux.intel.com
-
- 04 Apr, 2022 1 commit
-
Borislav Petkov authored
Those were added as part of the SMAP enablement, but SMAP is now an integral part of the kernel proper and there's no need to disable it anymore. Rip out that functionality. Leave --uaccess default on for objtool, as this is what objtool should do by default anyway. If it is still needed, clearcpuid=smap can be used.

Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220127115626.14179-4-bp@alien8.de
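For reference, a hedged sketch of the kind of option being ripped out, approximating its pre-removal form in arch/x86/Kconfig:

  config X86_SMAP
          def_bool y
          prompt "Supervisor Mode Access Prevention" if EXPERT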
-
- 31 Mar, 2022 1 commit
-
Thomas Gleixner authored
Revert commit bf9ad37d. It needs to be better encapsulated and generalized.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
-
- 29 Mar, 2022 1 commit
-
Masami Hiramatsu authored
Replace the kretprobe trampoline code with rethook on x86. With this patch, kretprobes on x86 use rethook instead of the kretprobe-specific trampoline code.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/164826163692.2455864.13745421016848209527.stgit@devnote2
-
- 25 Mar, 2022 1 commit
-
Anshuman Khandual authored
The ARCH_HAS_FILTER_PGPROT config has duplicate definitions on the platforms that subscribe to it. Instead, make it a generic config option which can be selected on applicable platforms when required.

Link: https://lkml.kernel.org/r/1643004823-16441-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
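A minimal sketch of the generic-option-plus-arch-select pattern described here (hedged; the ARCH_WANT_GENERAL_HUGETLB change below follows the same shape):

  # mm/Kconfig
  config ARCH_HAS_FILTER_PGPROT
          bool

  # arch/x86/Kconfig
          select ARCH_HAS_FILTER_PGPROT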
-
- 22 Mar, 2022 4 commits
-
Anshuman Khandual authored
The ARCH_WANT_GENERAL_HUGETLB config has duplicate definitions on the platforms that subscribe to it. Instead, make it a generic config option which can be selected on applicable platforms when required.

Link: https://lkml.kernel.org/r/1643718465-4324-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Nathan Chancellor authored
With CONFIG_X86_KERNEL_IBT=y and a version of ld.lld prior to 14.0.0, there are numerous objtool warnings along the lines of:

  warning: objtool: .plt+0x6: indirect jump found in RETPOLINE build

This is a known issue that has been resolved in ld.lld 14.0.0. Prevent CONFIG_X86_KERNEL_IBT from being selectable when using one of these problematic ld.lld versions.

Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220318230747.3900772-3-nathan@kernel.org
-
Nathan Chancellor authored
Commit 156ff4a5 ("x86/ibt: Base IBT bits") added a check for a crash with 'clang -fcf-protection=branch -mfentry -pg', which was intended to exclude Clang versions older than 14.0.0 from selecting CONFIG_X86_KERNEL_IBT. clang-11 does not have the issue that the check is testing for, so CONFIG_X86_KERNEL_IBT is selectable. Unfortunately, there is a different crash in clang-11 that was fixed in clang-12. To make matters worse, that crash does not appear to be entirely deterministic, as the same input to the compiler will sometimes crash and other times not, which makes dynamically checking for the crash, like for the '-pg' one, unreliable.

To make everything work properly for all common versions of clang, use a hard version check of 14.0.0, as that will be the first upstream release that has both bugs properly fixed.

Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220318230747.3900772-2-nathan@kernel.org
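A hedged, simplified Kconfig sketch combining the toolchain gates from the two entries above (the exact expressions in arch/x86/Kconfig are more involved and may differ):

  config CC_HAS_IBT
          # GCC >= 9 and Clang/LLVM >= 14 have working IBT support
          def_bool (CC_IS_GCC && GCC_VERSION >= 90000) || \
                   (CC_IS_CLANG && CLANG_VERSION >= 140000)

  config X86_KERNEL_IBT
          prompt "Indirect Branch Tracking"
          bool
          depends on X86_64 && CC_HAS_IBT
          # ld.lld < 14.0.0 corrupts .plt in retpoline builds
          depends on !LD_IS_LLD || LLD_VERSION >= 140000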
-
Alexei Starovoitov authored
This reverts commit 75caf33e.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 18 Mar, 2022 1 commit
-
Masami Hiramatsu authored
Add a rethook implementation for x86. Most of the code has been copied from kretprobes on x86.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/164735286243.1084943.7477055110527046644.stgit@devnote2
-
- 15 Mar, 2022 4 commits
-
Nathan Chancellor authored
There are two outstanding issues with CONFIG_X86_X32_ABI and llvm-objcopy, with similar root causes:

 1. llvm-objcopy does not properly convert .note.gnu.property when going from x86_64 to x86_x32, resulting in a corrupted section when linking: https://github.com/ClangBuiltLinux/linux/issues/1141
 2. llvm-objcopy produces corrupted compressed debug sections when going from x86_64 to x86_x32, also resulting in an error when linking: https://github.com/ClangBuiltLinux/linux/issues/514

After commit 41c5ef31ad71 ("x86/ibt: Base IBT bits"), the .note.gnu.property section is always generated when CONFIG_X86_KERNEL_IBT is enabled, which causes the first issue to become visible with an allmodconfig build:

  ld.lld: error: arch/x86/entry/vdso/vclock_gettime-x32.o:(.note.gnu.property+0x1c): program property is too short

To avoid this error, do not allow CONFIG_X86_X32_ABI to be selected when using llvm-objcopy. If the two issues ever get fixed in llvm-objcopy, this can be turned into a feature check.

Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220314194842.3452-3-nathan@kernel.org
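A hedged sketch of how such a toolchain exclusion can be expressed in Kconfig (the actual dependency line may differ):

  config X86_X32_ABI
          bool "x32 ABI for 64-bit mode"
          depends on X86_64
          # llvm-objcopy mangles .note.gnu.property and compressed
          # debug sections when converting x86_64 to x86_x32 objects
          depends on $(success,$(OBJCOPY) --version | head -n1 | grep -vq llvm)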
-
Masahiro Yamada authored
Commit 0bf62763 ("x32: Warn and disable rather than error if binutils too old") added a small test in arch/x86/Makefile because binutils 2.22 or newer is needed to properly support elf32-x86-64. This check is no longer necessary, as the minimum supported version of binutils is 2.23, which is enforced at configuration time with scripts/min-tool-version.sh.

Remove this check and replace all uses of CONFIG_X86_X32 with CONFIG_X86_X32_ABI, as the two symbols are no longer necessary.

[nathan: Rebase, fix up a few places where CONFIG_X86_X32 was still used, and simplify commit message to satisfy -tip requirements]

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220314194842.3452-2-nathan@kernel.org
-
Peter Zijlstra authored
Objtool's --ibt option generates .ibt_endbr_seal, which lists superfluous ENDBR instructions. That is, those instructions for which the function is never indirectly called. Overwrite these ENDBR instructions with a NOP4 so that these functions can never be indirectly called, reducing the number of viable ENDBR targets in the kernel.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154319.822545231@infradead.org
-
Peter Zijlstra authored
Add Kconfig, Makefile and basic instruction support for x86 IBT. (Ab)use __DISABLE_EXPORTS to disable IBT since it's already employed to mark compressed and purgatory code. Additionally mark realmode with it as well to avoid inserting ENDBR instructions there. While ENDBR is technically a NOP, inserting them was causing some grief due to code growth. There's also a problem with using __noendbr in code compiled without -fcf-protection=branch.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lore.kernel.org/r/20220308154317.519875203@infradead.org
-
- 04 Mar, 2022 1 commit
-
Oleg Nesterov authored
On x86_64 we must disable preemption before we enable interrupts for stack faults, int3 and debugging, because the current task is using a per-CPU debug stack defined by the IST. If we schedule out, another task can come in and use the same stack and cause the stack to be corrupted and crash the kernel on return.

When CONFIG_PREEMPT_RT is enabled, spinlock_t locks become sleeping locks, and one of these is the spin lock used in signal handling. Some of the debug code (int3) causes do_trap() to send a signal. This function calls a spinlock_t lock that has been converted to a sleeping lock. If this happens, the above stack-corruption issues are possible.

Instead of calling the signal right away, for PREEMPT_RT and x86, the signal information is stored in the task's task_struct and TIF_NOTIFY_RESUME is set. Then on exit of the trap, the signal resume code will send the signal when preemption is enabled.

[ rostedt: Switched from #ifdef CONFIG_PREEMPT_RT to ARCH_RT_DELAYS_SIGNAL_SEND and added comments to the code. ]
[ bigeasy: Add on 32bit as per Yang Shi, minor rewording. ]
[ tglx: Use a config option ]

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/Ygq5aBB/qMQw6aP5@linutronix.de
-
- 02 Mar, 2022 1 commit
-
Song Liu authored
The kernel test robot reported a kernel BUG like:

  [   44.587744][    T1] kernel BUG at arch/x86/mm/physaddr.c:76!
  [   44.590151][    T1] __vmalloc_area_node (mm/vmalloc.c:622 mm/vmalloc.c:2995)
  [   44.590151][    T1] __vmalloc_node_range (mm/vmalloc.c:3108)
  [   44.590151][    T1] __vmalloc_node (mm/vmalloc.c:3157)

which is triggered with HAVE_ARCH_HUGE_VMALLOC on 32-bit x86. Since BPF only uses HAVE_ARCH_HUGE_VMALLOC for x86_64, turn it off for 32-bit x86.

Fixes: fac54e2b ("x86/Kconfig: Select HAVE_ARCH_HUGE_VMALLOC with HAVE_ARCH_HUGE_VMAP")
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Song Liu <song@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20220302175126.247459-2-song@kernel.org
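The fix presumably amounts to a one-line dependency change in arch/x86/Kconfig, along these lines (hedged):

          select HAVE_ARCH_HUGE_VMALLOC           if X86_64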
-
- 28 Feb, 2022 1 commit
-
Mateusz Jończyk authored
In Kconfig, inside the "Processor type and features" menu, there is the CONFIG_I8K option: "Dell i8k legacy laptop support". This is very confusing: enabling CONFIG_I8K is not required for the kernel to support old Dell laptops. This option is specific to the dell-smm-hwmon driver, which mostly exports some hardware monitoring information and allows the user to change fan speed.

This option is misplaced, so move CONFIG_I8K to drivers/hwmon/Kconfig, where it belongs. Also, modify the dependency order: change "select SENSORS_DELL_SMM" to "depends on SENSORS_DELL_SMM", as it is just a configuration option of dell-smm-hwmon. This includes changing the option type from tristate to bool. It was tristate because it could select CONFIG_SENSORS_DELL_SMM=m.

When running "make oldconfig" on configurations with CONFIG_SENSORS_DELL_SMM enabled, this change will result in an additional question (which could be printed several times during bisecting). I think that tidying up the configuration is worth it, though. The next patch tweaks the description of CONFIG_I8K.

Signed-off-by: Mateusz Jończyk <mat.jonczyk@o2.pl>
Cc: Pali Rohár <pali@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Jean Delvare <jdelvare@suse.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Mark Gross <markgross@kernel.org>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20220212125654.357408-1-mat.jonczyk@o2.pl
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
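A hedged sketch of the relocated option (prompt text approximated):

  # drivers/hwmon/Kconfig
  config I8K
          bool "Legacy /proc/i8k interface of Dell laptops"
          depends on SENSORS_DELL_SMM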
-
- 26 Feb, 2022 1 commit
-
Kees Cook authored
One of the things that CONFIG_HARDENED_USERCOPY sanity-checks is whether an object that is about to be copied to/from userspace is overlapping the stack at all. If it is, it performs a number of inexpensive bounds checks. One of the finer-grained checks is whether an object crosses stack frames within the stack region. Doing this on x86 with CONFIG_FRAME_POINTER was cheap/easy. Doing it with ORC was deemed too heavy, and was left out (a while ago), leaving only the coarser whole-stack check.

The LKDTM tests USERCOPY_STACK_FRAME_TO and USERCOPY_STACK_FRAME_FROM try to exercise these cross-frame cases to validate the defense is working. They have been failing ever since ORC was added (which was expected). While Muhammad was investigating various LKDTM failures[1], he asked me for additional details on them, and I realized that when exact stack frame boundary checking is not available (i.e. everything except x86 with FRAME_POINTER), it could check whether a stack object is at least "current depth valid", in the sense that any object within the stack region but not between start-of-stack and current_stack_pointer should be considered unavailable (i.e. its lifetime is from a call no longer present on the stack).

Introduce ARCH_HAS_CURRENT_STACK_POINTER to track which architectures have actually implemented the common global register alias. Additionally report usercopy bounds checking failures with an offset from current_stack_pointer, which may assist with diagnosing failures.

The LKDTM USERCOPY_STACK_FRAME_TO and USERCOPY_STACK_FRAME_FROM tests (once slightly adjusted in a separate patch) pass again with this fixed.

[1] https://github.com/kernelci/kernelci-project/issues/84

Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
v1: https://lore.kernel.org/lkml/20220216201449.2087956-1-keescook@chromium.org
v2: https://lore.kernel.org/lkml/20220224060342.1855457-1-keescook@chromium.org
v3: https://lore.kernel.org/lkml/20220225173345.3358109-1-keescook@chromium.org
v4: - improve commit log (akpm)
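A hedged approximation of the new cross-arch opt-in, modeled on the usual mm/Kconfig style:

  config ARCH_HAS_CURRENT_STACK_POINTER
          bool
          help
            In support of HARDENED_USERCOPY performing stack variable
            lifetime checking, an architecture-agnostic way to find the
            stack pointer is needed. Once an architecture defines an
            unsigned long global register alias named
            "current_stack_pointer", this config can be selected.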
-
- 19 Feb, 2022 1 commit
-
Mark Rutland authored
Where an architecture selects HAVE_STATIC_CALL but not HAVE_STATIC_CALL_INLINE, each static call has an out-of-line trampoline which will either branch to a callee or return to the caller. On such architectures, a number of constraints can conspire to make those trampolines more complicated and potentially less useful than we'd like. For example:

 * Hardware and software control flow integrity schemes can require the addition of "landing pad" instructions (e.g. `BTI` for arm64), which will also be present at the "real" callee.
 * Limited branch ranges can require that trampolines generate or load an address into a register and perform an indirect branch (or at least have a slow path that does so). This loses some of the benefits of having a direct branch.
 * Interaction with SW CFI schemes can be complicated and fragile, e.g. requiring that we can recognise idiomatic codegen and remove indirections, at least until clang provides more helpful mechanisms for dealing with this.

For PREEMPT_DYNAMIC, we don't need the full power of static calls, as we really only need to enable/disable specific preemption functions. We can achieve the same effect without a number of the pain points above by using static keys to fold early returns into the preemption functions themselves rather than in an out-of-line trampoline, effectively inlining the trampoline into the start of the function.

For arm64, this results in good code generation. For example, the dynamic_cond_resched() wrapper looks as follows when enabled. When disabled, the first `B` is replaced with a `NOP`, resulting in an early return.

| <dynamic_cond_resched>:
|        bti     c
|        b       <dynamic_cond_resched+0x10>     // or `nop`
|        mov     w0, #0x0
|        ret
|        mrs     x0, sp_el0
|        ldr     x0, [x0, #8]
|        cbnz    x0, <dynamic_cond_resched+0x8>
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      <preempt_schedule_common>
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

... compared to the regular form of the function:

| <__cond_resched>:
|        bti     c
|        mrs     x0, sp_el0
|        ldr     x1, [x0, #8]
|        cbz     x1, <__cond_resched+0x18>
|        mov     w0, #0x0
|        ret
|        paciasp
|        stp     x29, x30, [sp, #-16]!
|        mov     x29, sp
|        bl      <preempt_schedule_common>
|        mov     w0, #0x1
|        ldp     x29, x30, [sp], #16
|        autiasp
|        ret

Any architecture which implements static keys should be able to use this to implement PREEMPT_DYNAMIC with similar cost to non-inlined static calls. Since this is likely to have greater overhead than (inlined) static calls, PREEMPT_DYNAMIC is only defaulted to enabled when HAVE_PREEMPT_DYNAMIC_CALL is selected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lore.kernel.org/r/20220214165216.2231574-6-mark.rutland@arm.com
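A hedged Kconfig sketch of the two PREEMPT_DYNAMIC flavors described above (dependencies approximated):

  config HAVE_PREEMPT_DYNAMIC
          bool

  config HAVE_PREEMPT_DYNAMIC_CALL
          bool
          depends on HAVE_STATIC_CALL
          select HAVE_PREEMPT_DYNAMIC

  config HAVE_PREEMPT_DYNAMIC_KEY
          # static-key variant for arches without usable static calls
          bool
          select HAVE_PREEMPT_DYNAMIC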
-
- 08 Feb, 2022 1 commit
-
Song Liu authored
This enables module_alloc() to allocate huge pages for 2MB+ requests. To check the difference this change makes, we need to enable CONFIG_PTDUMP_DEBUGFS and call module_alloc(2MB). Before the change, /sys/kernel/debug/page_tables/kernel shows a pte for this mapping. With the change, it shows a pmd for this mapping.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220204185742.271030-2-song@kernel.org
-
- 28 Jan, 2022 1 commit
-
Steven Rostedt (Google) authored
First S390 complained that the sorting of the mcount sections at build time caused the kernel to crash on their architecture. Now PowerPC is complaining about it too, and ARM64 also appears to be having issues.

It may be necessary to also update the relocation table for the values in the mcount table. Not only do we have to sort the table, but we also have to update the relocations that may be applied to the items in the table. If the system is not relocatable, then it is fine to sort, but if it is, some architectures may have issues (although x86 does not, as it shifts all addresses the same).

Add a HAVE_BUILDTIME_MCOUNT_SORT option that an architecture can set to say it is safe to do the sorting at build time. Also update the config to make the build-time sorting in the sorttable code in scripts/ depend on CONFIG_BUILDTIME_MCOUNT_SORT.

Link: https://lore.kernel.org/all/944D10DA-8200-4BA9-8D0A-3BED9AA99F82@linux.ibm.com/
Link: https://lkml.kernel.org...
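A hedged sketch of the opt-in pair described here, approximating the kernel/trace/Kconfig entries:

  config HAVE_BUILDTIME_MCOUNT_SORT
          # an architecture selects this if it is safe to sort the
          # mcount_loc section at build time
          bool

  config BUILDTIME_MCOUNT_SORT
          bool
          default y
          depends on HAVE_BUILDTIME_MCOUNT_SORT && DYNAMIC_FTRACE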
-
- 23 Jan, 2022 1 commit
-
David Hildenbrand authored
x86-64 supports only CONFIG_SPARSEMEM; there is nothing for users to select. So enable the memory model selection (via CONFIG_ARCH_SELECT_MEMORY_MODEL) only if both SPARSEMEM and FLATMEM are possible, which isn't the case on x86-64.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20210929144321.50411-1-david@redhat.com
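Expressed as a Kconfig sketch (hedged; this approximates the x86 entry after the change):

  config ARCH_SELECT_MEMORY_MODEL
          def_bool y
          depends on ARCH_SPARSEMEM_ENABLE && ARCH_FLATMEM_ENABLE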
-
- 20 Jan, 2022 2 commits
-
Marco Elver authored
Until recent versions of GCC and Clang, it was not possible to disable KCOV instrumentation via a function attribute. The relevant function attribute was introduced in 540540d0 ("kcov: add __no_sanitize_coverage to fix noinstr for all architectures"). x86 was the first architecture to want a working noinstr, and at the time no compiler support for the attribute existed yet. Therefore, commit 0f1441b4 ("objtool: Fix noinstr vs KCOV") introduced the ability to NOP __sanitizer_cov_*() calls in .noinstr.text.

However, this doesn't work for other architectures like arm64 and s390 that want a working noinstr per ARCH_WANTS_NO_INSTR. At the time of 0f1441b4, we didn't yet have ARCH_WANTS_NO_INSTR, but now we can move the Kconfig dependency checks to the generic KCOV option. KCOV will be available if:

 - architecture does not care about noinstr, OR
 - we have objtool support (like on x86), OR
 - GCC is 12.0 or newer, OR
 - Clang is 13.0 or newer.

Link: https://lkml.kernel.org/r/20211201152604.3984495-1-elver@google.com
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
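The dependency list above translates to roughly this Kconfig expression (hedged; the "objtool support" leg is approximated here with STACK_VALIDATION, which is what x86 provided at the time):

  config KCOV
          bool "Code coverage for fuzzing"
          depends on ARCH_HAS_KCOV
          depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS
          depends on !ARCH_WANTS_NO_INSTR || STACK_VALIDATION || \
                     GCC_VERSION >= 120000 || CLANG_VERSION >= 130000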
-
Kefeng Wang authored
Patch series "mm: percpu: Cleanup percpu first chunk function". When supporting page mapping percpu first chunk allocator on arm64, we found there are lots of duplicated codes in percpu embed/page first chunk allocator. This patchset is aimed to cleanup them and should no function change. The currently supported status about 'embed' and 'page' in Archs shows below, embed: NEED_PER_CPU_PAGE_FIRST_CHUNK page: NEED_PER_CPU_EMBED_FIRST_CHUNK embed page ------------------------ arm64 Y Y mips Y N powerpc Y Y riscv Y N sparc Y Y x86 Y Y ------------------------ There are two interfaces about percpu first chunk allocator, extern int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size, size_t atom_size, pcpu_fc_cpu_distance_fn_t cpu_distance_fn, - pcpu_fc_alloc_fn_t alloc_fn, - pcpu_fc_free_fn_t free_fn); + pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn); extern int __init pcpu_page_first_chunk(size_t reserved_size, - pcpu_fc_alloc_fn_t alloc_fn, - pcpu_fc_free_fn_t free_fn, - pcpu_fc_populate_pte_fn_t populate_pte_fn); + pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn); The pcpu_fc_alloc_fn_t/pcpu_fc_free_fn_t is killed, we provide generic pcpu_fc_alloc() and pcpu_fc_free() function, which are called in the pcpu_embed/page_first_chunk(). 1) For pcpu_embed_first_chunk(), pcpu_fc_cpu_to_node_fn_t is needed to be provided when archs supported NUMA. 2) For pcpu_page_first_chunk(), the pcpu_fc_populate_pte_fn_t is killed too, a generic pcpu_populate_pte() which marked '__weak' is provided, if you need a different function to populate pte on the arch(like x86), please provide its own implementation. [1] https://github.com/kevin78/linux.git percpu-cleanup This patch (of 4): The HAVE_SETUP_PER_CPU_AREA/NEED_PER_CPU_EMBED_FIRST_CHUNK/ NEED_PER_CPU_PAGE_FIRST_CHUNK/USE_PERCPU_NUMA_NODE_ID configs, which have duplicate definitions on platforms that subscribe it. Move them into mm, drop these redundant definitions and instead just select it on applicable platforms. Link: https://lkml.kernel.org/r/20211216112359.103822-1-wangkefeng.wang@huawei.com Link: https://lkml.kernel.org/r/20211216112359.103822-2-wangkefeng.wang@huawei.com Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Will Deacon <will@kernel.org> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dennis Zhou <dennis@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: "Rafael J. Wysocki" <rafael@kernel.org> Cc: Tejun Heo <tj@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 Jan, 2022 2 commits
-
Yury Norov authored
In the 5.12 cycle we enabled the GENERIC_FIND_FIRST_BIT config option for ARM64 and MIPS. It increased performance and shrank .text size, and so far I haven't received any negative feedback on the change:

  https://lore.kernel.org/linux-arch/20210225135700.1381396-1-yury.norov@gmail.com/

Now I think it's a good time to switch all architectures to use find_{first,last}_bit() unconditionally, and so remove the corresponding config option. The patch doesn't introduce functional changes for arc, arm, arm64, mips, m68k, s390 and x86; for other architectures I expect improvement both in performance and .text size.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Tested-by: Alexander Lobakin <alobakin@pm.me> (mips)
Reviewed-by: Alexander Lobakin <alobakin@pm.me> (mips)
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Will Deacon <will@kernel.org>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
-
Pasha Tatashin authored
Add page table check hooks into routines that modify user page tables.

Link: https://lkml.kernel.org/r/20211221154650.1047963-5-pasha.tatashin@soleen.com
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Greg Thelen <gthelen@google.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Slaby <jirislaby@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kees Cook <keescook@chromium.org>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wei Xu <weixugc@google.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 Dec, 2021 1 commit
-
Peter Zijlstra authored
This code puts an exception table entry on the PREFETCH instruction to overwrite it with a JMP.d8 when it triggers an exception. Except, of course, our code is no longer writable; also SMP. Instead of fixing this broken mess, simply take it out.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/YZKQzUmeNuwyvZpk@hirez.programming.kicks-ass.net
-
- 09 Dec, 2021 1 commit
-
Jarkko Sakkinen authored
== Problem ==

The amount of SGX memory on a system is determined by the BIOS and it varies wildly between systems. It can be as small as dozens of MB's and as large as many GB's on servers. Just like how applications need to know how much regular RAM is available, enclave builders need to know how much SGX memory an enclave can consume.

== Solution ==

Introduce a new sysfs file:

  /sys/devices/system/node/nodeX/x86/sgx_total_bytes

to enumerate the amount of SGX memory available in each NUMA node. This serves the same function for SGX as /proc/meminfo or /sys/devices/system/node/nodeX/meminfo does for normal RAM.

'sgx_total_bytes' is needed today to help drive the SGX selftests. SGX-specific swap code is exercised by creating overcommitted enclaves which are larger than the physical SGX memory on the system. They currently use a CPUID-based approach which can diverge from the actual amount of SGX memory available. 'sgx_total_bytes' ensures that the selftests can work efficiently and do not attempt stupid things like creating a 100,000 MB enclave on a system with 128 MB of SGX memory.

== Implementation Details ==

Introduce the CONFIG_HAVE_ARCH_NODE_DEV_GROUP opt-in flag to expose an arch-specific attribute group, and add an attribute for the amount of SGX memory in bytes to each NUMA node:

== ABI Design Discussion ==

As opposed to the per-node ABI, a single, global ABI was considered. However, this would prevent enclaves from being able to size themselves so that they fit on a single NUMA node. Essentially, a single value would rule out NUMA optimizations for enclaves.

Create a new "x86/" directory inside each "nodeX/" sysfs directory. 'sgx_total_bytes' is expected to be the first of at least a few sgx-specific files to be placed in the new directory. Just scanning /proc/meminfo, these are the no-brainers that we have for RAM, but need for SGX:

  MemTotal:       xxxx kB // sgx_total_bytes (implemented here)
  MemFree:        yyyy kB // sgx_free_bytes
  SwapTotal:      zzzz kB // sgx_swapped_bytes

So, at *least* three. I think we will eventually end up needing something more along the lines of a dozen. A new directory (as opposed to being in the nodeX/ "root" directory) avoids cluttering the root with several "sgx_*" files.

Place the new file in a new "nodeX/x86/" directory because SGX is highly x86-specific. It is very unlikely that any other architecture (or even non-Intel x86 vendor) will ever implement SGX. Using "sgx/" as opposed to "x86/" was also considered, but there is a real chance this can get used for other arch-specific purposes.

[ dhansen: rewrite changelog ]

Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20211116162116.93081-2-jarkko@kernel.org
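A hedged sketch of the opt-in flag plus the x86 selection (approximated):

  config HAVE_ARCH_NODE_DEV_GROUP
          bool

  # arch/x86/Kconfig
          select HAVE_ARCH_NODE_DEV_GROUP if X86_SGX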
-