- 25 Jul, 2022 8 commits
-
Will Deacon authored
* for-next/kpti:
  arm64: correct the effect of mitigations off on kpti
  arm64: entry: simplify trampoline data page
  arm64: mm: install KPTI nG mappings with MMU enabled
  arm64: kpti-ng: simplify page table traversal logic
-
Will Deacon authored
* for-next/kcsan:
  arm64: kcsan: Support detecting more missing memory barriers
  asm-generic: Add memory barrier dma_mb()
-
Will Deacon authored
* for-next/irqflags-nmi:
  arm64: select TRACE_IRQFLAGS_NMI_SUPPORT
  arch: make TRACE_IRQFLAGS_NMI_SUPPORT generic
-
Will Deacon authored
* for-next/ioremap:
  arm64: Add HAVE_IOREMAP_PROT support
  arm64: mm: Convert to GENERIC_IOREMAP
  mm: ioremap: Add ioremap/iounmap_allowed()
  mm: ioremap: Setup phys_addr of struct vm_struct
  mm: ioremap: Use more sensible name in ioremap_prot()
  ARM: mm: kill unused runtime hook arch_iounmap()
-
Will Deacon authored
* for-next/extable:
  arm64: extable: cleanup redundant extable type EX_TYPE_FIXUP
  arm64: extable: move _cond_extable to _cond_uaccess_extable
  arm64: extable: make uaccess helper use extable type EX_TYPE_UACCESS_ERR_ZERO
  arm64: asm-extable: add asm uaccess helpers
  arm64: asm-extable: move data fields
  arm64: extable: add new extable type EX_TYPE_KACCESS_ERR_ZERO support
-
Will Deacon authored
* for-next/errata:
  arm64: errata: Remove AES hwcap for COMPAT tasks
  arm64: errata: Add Cortex-A510 to the repeat tlbi list
-
Will Deacon authored
* for-next/docs:
  Documentation/arm64: update memory layout table.
-
Will Deacon authored
* for-next/cpuidle:
  arm64: cpuidle: remove generic cpuidle support
  cpuidle: cpuidle-arm: remove arm64 support
-
- 19 Jul, 2022 1 commit
-
James Morse authored
Cortex-A57 and Cortex-A72 have an erratum where an interrupt that occurs between a pair of AES instructions in aarch32 mode may corrupt the ELR. The task will subsequently produce the wrong AES result.

The AES instructions are part of the cryptographic extensions, which are optional. User-space software will detect the support for these instructions from the hwcaps. If the platform doesn't support these instructions a software implementation should be used.

Remove the hwcap bits on affected parts to indicate user-space should not use the AES instructions.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20220714161523.279570-3-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
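As a hedged illustration of the user-space side (not part of the patch), a program that honours the hwcaps falls back to a software AES path automatically once the bit is hidden; on 32-bit/compat tasks the AES feature bit is reported via AT_HWCAP2:

```c
#include <stdio.h>
#include <sys/auxv.h>		/* getauxval(), AT_HWCAP2 */

#ifndef HWCAP2_AES
#define HWCAP2_AES	(1 << 0)	/* value from arch/arm/include/uapi/asm/hwcap.h */
#endif

int main(void)
{
	/* On affected parts the kernel now hides this bit, so a
	 * hwcap-honouring crypto library silently selects its software
	 * AES implementation instead of the erratum-prone instructions. */
	if (getauxval(AT_HWCAP2) & HWCAP2_AES)
		puts("AES instructions advertised: using hardware AES");
	else
		puts("AES hwcap absent: using software AES fallback");
	return 0;
}
```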
-
- 05 Jul, 2022 1 commit
-
James Morse authored
Cortex-A510 is affected by an erratum where in rare circumstances the CPUs may not handle a race between a break-before-make sequence on one CPU, and another CPU accessing the same page. This could allow a store to a page that has been unmapped.

Work around this by adding the affected CPUs to the list that needs TLB sequences to be done twice.

Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20220704155732.21216-1-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 28 Jun, 2022 7 commits
-
Liu Song authored
If KASLR is enabled, then kpti will be forced on even if mitigations=off is passed, so we need to adjust the description of this parameter.

Signed-off-by: Liu Song <liusong@linux.alibaba.com>
Link: https://lore.kernel.org/r/1656033648-84181-1-git-send-email-liusong@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Tong Tiangen authored
Currently, the extable type EX_TYPE_FIXUP has no users, so we can safely remove it.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-7-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Tong Tiangen authored
Currently, we use _cond_extable for the cache maintenance uaccess helper caches_clean_inval_user_pou(), so this should be moved over to EX_TYPE_UACCESS_ERR_ZERO, and _cond_extable renamed to _cond_uaccess_extable for clarity.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-6-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Tong Tiangen authored
Currently, the extable type used by __arch_copy_from/to_user() is EX_TYPE_FIXUP. It is clearer to use the more meaningful EX_TYPE_UACCESS_*.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-5-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
In subsequent patches we want to explicitly annotate uaccess fixups in assembly files.

We have existing helpers for this for inline assembly, but due to differing stringification requirements it's not possible to have a single definition that we can use for both inline asm and plain asm files. So, as with other cases (e.g. gpr-num.h), we must provide separate helpers for plain asm and inline asm.

So that we can do so, this patch adds helpers to define EX_TYPE_UACCESS_ERR_ZERO fixups in plain assembly. These correspond 1-1 with the inline assembly versions, except for the absence of stringification. No plain assembly helpers are added for EX_TYPE_LOAD_UNALIGNED_ZEROPAD fixups, as these only exist for a single C function.

For copy_{to,from}_user() we'll need fixups with regs and err, so I've added _ASM_EXTABLE_UACCESS(insn, fixup), where both the error and zero registers are WZR.

For clarity, the existing `_asm_extable` assembly macro is now defined in terms of the _ASM_EXTABLE() CPP macro, making the CPP macros canonical in all cases.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-4-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
In subsequent patches we'll need to fill in extable data fields in regular assembly files. In preparation for this, move the definitions of the extable data fields earlier in asm-extable.h so that they are defined for both assembly and C files.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-3-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Tong Tiangen authored
Currently, the extable type EX_TYPE_UACCESS_ERR_ZERO is used by __get/put_kernel_nofault(), but those helpers are not uaccess-type, so add a new extable type EX_TYPE_KACCESS_ERR_ZERO which can be used by __get/put_kernel_nofault(). This also prepares for distinguishing the two types in the machine-check-safe handling.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220621072638.1273594-2-tongtiangen@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
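For context, a hedged sketch of the kind of kernel-access path these fixups sit behind (the wrapper below is illustrative, not from the patch): __get_kernel_nofault() branches to the supplied error label when the access faults, and it is that fixup which the new EX_TYPE_KACCESS_ERR_ZERO entries now describe.

```c
#include <linux/errno.h>
#include <linux/uaccess.h>

/* Illustrative helper, not from the patch: read a long from a kernel
 * address that may be unmapped, without oopsing the caller. */
static int peek_kernel_long(const unsigned long *src, unsigned long *out)
{
	unsigned long val;

	pagefault_disable();
	/* A fault in the access below is routed to the 'fault' label via
	 * the (now EX_TYPE_KACCESS_ERR_ZERO) extable entry. */
	__get_kernel_nofault(&val, src, unsigned long, fault);
	pagefault_enable();

	*out = val;
	return 0;
fault:
	pagefault_enable();
	return -EFAULT;
}
```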
-
- 27 Jun, 2022 6 commits
-
Kefeng Wang authored
With the ioremap_prot() definition coming from the generic ioremap code, also move pte_pgprot() from hugetlbpage.c into pgtable.h; arm64 can then select HAVE_IOREMAP_PROT, which enables the generic_access_phys() code. This is useful for debugging, e.g. with gdb.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-7-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
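What HAVE_IOREMAP_PROT buys in practice: generic_access_phys() can be wired up as a VMA's .access method so that ptrace()/gdb can peek and poke an MMIO mapping of a device. A hedged driver-side sketch (the foo_* names and the device base address are made up):

```c
#include <linux/fs.h>
#include <linux/mm.h>

#define FOO_DEV_BASE	0x90000000UL	/* made-up device physical base */

static const struct vm_operations_struct foo_vm_ops = {
	/* Only usable on architectures that select HAVE_IOREMAP_PROT. */
	.access = generic_access_phys,
};

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long pfn = FOO_DEV_BASE >> PAGE_SHIFT;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	vma->vm_ops = &foo_vm_ops;
	return io_remap_pfn_range(vma, vma->vm_start, pfn,
				  vma->vm_end - vma->vm_start,
				  vma->vm_page_prot);
}
```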
-
Kefeng Wang authored
Add a hook for arm64's special handling in ioremap(), convert ioremap_wc/np/cache to use ioremap_prot() from GENERIC_IOREMAP, update the copyright and drop the unused inclusions.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-6-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Kefeng Wang authored
Add special hooks for an architecture to verify the addr, size or prot when doing ioremap() or iounmap(), which makes the generic ioremap more useful.

ioremap_allowed() returns a bool:
  - true means continue to remap
  - false means skip remapping and return directly

iounmap_allowed() returns a bool:
  - true means continue to vunmap
  - false means skip vunmapping and return directly

Meanwhile, only vunmap the address when it is in the vmalloc area, as the generic ioremap only returns vmalloc addresses.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Baoquan He <bhe@redhat.com>
Link: https://lore.kernel.org/r/20220607125027.44946-5-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
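A hedged sketch of how an architecture might use the new hook (illustrative only, not the exact arm64 implementation): defining the macro in the arch's <asm/io.h> ahead of asm-generic/io.h overrides the default stub, which simply returns true.

```c
/* In the arch's <asm/io.h> (sketch), before <asm-generic/io.h> is pulled in. */
#define ioremap_allowed ioremap_allowed
static inline bool ioremap_allowed(phys_addr_t phys_addr, size_t size,
				   unsigned long prot)
{
	phys_addr_t last_addr = phys_addr + size - 1;

	/* Illustrative checks: stay within the addressable range and
	 * refuse to ioremap() normal, already-mapped RAM. */
	if (last_addr & ~PHYS_MASK)
		return false;
	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
		return false;

	return true;
}
```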
-
Kefeng Wang authored
Show physical address of each ioremap in /proc/vmallocinfo.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220607125027.44946-4-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Kefeng Wang authored
Use the more meaningful and sensible name phys_addr instead of addr in ioremap_prot().

Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220610092255.32445-1-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Kefeng Wang authored
Since the following commits:

  v5.4 commit 59d3ae9a ("ARM: remove Intel iop33x and iop13xx support")
  v5.11 commit 3e3f354b ("ARM: remove ebsa110 platform")

the runtime hook arch_iounmap() on ARM is useless. Kill arch_iounmap() and __iounmap().

Cc: Russell King <linux@armlinux.org.uk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20220607125027.44946-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 24 Jun, 2022 1 commit
-
Ard Biesheuvel authored
Get rid of some clunky open-coded arithmetic on section addresses, by emitting the trampoline data variables into a separate, dedicated r/o data section, and putting it at the next page boundary. This way, we can access the literals via a single LDR instruction.

While at it, get rid of other, implicit literals, and use ADRP/ADD or MOVZ/MOVK sequences, as appropriate. Note that the latter are only supported for CONFIG_RELOCATABLE=n (which is usually the case if CONFIG_RANDOMIZE_BASE=n), so update the CPP conditionals to reflect this.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220622161010.3845775-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 23 Jun, 2022 9 commits
-
Andre Mueller authored
Commit b89ddf4c ("arm64/bpf: Remove 128MB limit for BPF JIT programs") removes the bpf jit region from the memory layout of the Aarch64 architecture. However, it forgets to update the documentation accordingly.

- Remove the bpf jit region.
- Fix the Start and End addresses of the modules region.
- Fix the Start address of the vmalloc region.

Signed-off-by: Andre Mueller <am@emlix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220621081651.61755-1-am@emlix.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Kefeng Wang authored
As "kcsan: Support detecting a subset of missing memory barriers"[1] introduced KCSAN_STRICT/KCSAN_WEAK_MEMORY which make kcsan detects more missing memory barrier, but arm64 don't have KCSAN instrumentation for barriers, so the new selftest test_barrier() and test cases for memory barrier instrumentation in kcsan_test module will fail, even panic on selftest. Let's prefix all barriers with __ on arm64, as asm-generic/barriers.h defined the final instrumented version of these barriers, which will fix the above issues. Note, barrier instrumentation that can be disabled via __no_kcsan with appropriate compiler-support (and not just with objtool help), see commit bd3d5bd1 ("kcsan: Support WEAK_MEMORY with Clang where no objtool support exists"), it adds disable_sanitizer_instrumentation to __no_kcsan attribute which will remove all sanitizer instrumentation fully (with Clang 14.0). Meanwhile, GCC does the same thing with no_sanitize. [1] https://lore.kernel.org/linux-mm/20211130114433.2580590-1-elver@google.com/Acked-by: Marco Elver <elver@google.com> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20220523113126.171714-3-wangkefeng.wang@huawei.comSigned-off-by: Will Deacon <will@kernel.org>
-
Kefeng Wang authored
The memory barrier dma_mb() was introduced by commit a76a3777 ("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"); it is used to ensure that prior accesses to memory by a CPU (both reads and writes) are ordered w.r.t. a subsequent MMIO write.

Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20220523113126.171714-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
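A hedged usage sketch (the ring structure and register offset are invented): dma_mb() orders prior reads and writes to normal memory before a subsequent MMIO write, whereas on arm64 the implicit barrier in writel() only orders prior writes.

```c
#include <linux/bits.h>
#include <linux/io.h>
#include <linux/types.h>

#define DESC_VALID	BIT(0)
#define RING_PROD_REG	0x10		/* made-up doorbell register offset */

struct my_desc { u32 flags; u32 len; };

struct my_ring {
	struct my_desc	*desc;		/* descriptors in coherent memory */
	u32		 mask;
	void __iomem	*mmio;
};

static void ring_doorbell(struct my_ring *ring, u32 prod)
{
	/* Publish the new descriptor in normal memory first. */
	ring->desc[prod & ring->mask].flags = DESC_VALID;

	/* Order the prior reads *and* writes to memory before the MMIO
	 * write below. */
	dma_mb();

	writel_relaxed(prod, ring->mmio + RING_PROD_REG);
}
```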
-
Ard Biesheuvel authored
In cases where we unmap the kernel while running in user space, we rely on ASIDs to distinguish the minimal trampoline from the full kernel mapping, and this means we must use non-global attributes for those mappings, to ensure they are scoped by ASID and will not hit in the TLB inadvertently.

We only do this when needed, as this is generally more costly in terms of TLB pressure, and so we boot without these non-global attributes, and apply them to all existing kernel mappings once all CPUs are up and we know whether or not the non-global attributes are needed. At this point, we cannot simply unmap and remap the entire address space, so we have to update all existing block and page descriptors in place.

Currently, we go through a lot of trouble to perform these updates with the MMU and caches off, to avoid violating break before make (BBM) rules imposed by the architecture. Since we make changes to page tables that are not covered by the ID map, we gain access to those descriptors by disabling translations altogether. This means that the stores to memory are issued with device attributes, and require extra care in terms of coherency, which is costly. We also rely on the ID map to access a shared flag, which requires the ID map to be executable and writable at the same time, which is another thing we'd prefer to avoid.

So let's switch to an approach where we replace the kernel mapping with a minimal mapping of a few pages that can be used for a minimal, ad-hoc fixmap that we can use to map each page table in turn as we traverse the hierarchy.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220609174320.4035379-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Ard Biesheuvel authored
Simplify the KPTI G-to-nG asm helper code by:
- pulling the 'table bit' test into the get/put macros so we can combine them and incorporate the entire loop;
- moving the 'table bit' test after the update of bit #11 so we no longer need separate next_xxx and skip_xxx labels;
- redefining the pmd/pud register aliases and the next_pmd/next_pud labels instead of branching to them if the number of configured page table levels is less than 3 or 4, respectively.

No functional change intended, except for the fact that we now descend into a next level table after setting bit #11 on its descriptor, but this should make no difference in practice.

While at it, switch to .L prefixed local labels so they don't clutter up the symbol tables, kallsyms, etc, and clean up the indentation for legibility.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20220609174320.4035379-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Due to an oversight, on arm64 lockdep IRQ state tracking doesn't work as intended in NMI context. This demonstrably results in bogus warnings from lockdep, and in theory could mask a variety of issues.

On arm64, we've consistently tracked IRQ flag state for NMIs (and saved/restored the state of the interrupted context) since commit:

  f0cd5ac1 ("arm64: entry: fix NMI {user, kernel}->kernel transitions")

That commit fixed most lockdep issues with NMI by virtue of the save/restore of the lockdep state of the interrupted context. However, for lockdep IRQ state tracking to consistently take effect in NMI context it has been necessary to select TRACE_IRQFLAGS_NMI_SUPPORT since commit:

  ed004953 ("locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs")

As arm64 does not select TRACE_IRQFLAGS_NMI_SUPPORT, this means that the lockdep state can be stale in NMI context, and some uses of that state can consume stale data.

When an NMI is taken arm64 entry code will call arm64_enter_nmi(). This will enter NMI context via __nmi_enter() before calling lockdep_hardirqs_off() to inform lockdep that IRQs have been masked. Where TRACE_IRQFLAGS_NMI_SUPPORT is not selected, lockdep_hardirqs_off() will not update lockdep state if called in NMI context. Thus if IRQs were enabled in the original context, lockdep will continue to believe that IRQs are enabled despite the call to lockdep_hardirqs_off().

However, the lockdep_assert_*() checks do take effect in NMI context, and will consume the stale lockdep state. If an NMI is taken from a context which had IRQs enabled, and during the handling of the NMI something calls lockdep_assert_irqs_disabled(), this will result in a spurious warning based upon the stale lockdep state.

This can be seen when using perf with GICv3 pseudo-NMIs. Within the perf NMI handler we may attempt a uaccess to record the userspace callchain, and if this faults, the el1_abort() call in the nested context will call exit_to_kernel_mode() when returning, which has a lockdep_assert_irqs_disabled() assertion:

| # ./perf record -a -g sh
| ------------[ cut here ]------------
| WARNING: CPU: 0 PID: 164 at arch/arm64/kernel/entry-common.c:73 exit_to_kernel_mode+0x118/0x1ac
| Modules linked in:
| CPU: 0 PID: 164 Comm: perf Not tainted 5.18.0-rc5 #1
| Hardware name: linux,dummy-virt (DT)
| pstate: 004003c5 (nzcv DAIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : exit_to_kernel_mode+0x118/0x1ac
| lr : el1_abort+0x80/0xbc
| sp : ffff8000080039f0
| pmr_save: 000000f0
| x29: ffff8000080039f0 x28: ffff6831054e4980 x27: ffff683103adb400
| x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001
| x23: 00000000804000c5 x22: 00000000000000c0 x21: 0000000000000001
| x20: ffffbd51e635ec44 x19: ffff800008003a60 x18: 0000000000000000
| x17: ffffaadf98d23000 x16: ffff800008004000 x15: 0000ffffd14f25c0
| x14: 0000000000000000 x13: 00000000000018eb x12: 0000000000000040
| x11: 000000000000001e x10: 000000002b820020 x9 : 0000000100110000
| x8 : 000000000045cac0 x7 : 0000ffffd14f25c0 x6 : ffffbd51e639b000
| x5 : 00000000000003e5 x4 : ffffbd51e58543b0 x3 : 0000000000000001
| x2 : ffffaadf98d23000 x1 : ffff6831054e4980 x0 : 0000000100110000
| Call trace:
| exit_to_kernel_mode+0x118/0x1ac
| el1_abort+0x80/0xbc
| el1h_64_sync_handler+0xa4/0xd0
| el1h_64_sync+0x74/0x78
| __arch_copy_from_user+0xa4/0x230
| get_perf_callchain+0x134/0x1e4
| perf_callchain+0x7c/0xa0
| perf_prepare_sample+0x414/0x660
| perf_event_output_forward+0x80/0x180
| __perf_event_overflow+0x70/0x13c
| perf_event_overflow+0x1c/0x30
| armv8pmu_handle_irq+0xe8/0x160
| armpmu_dispatch_irq+0x2c/0x70
| handle_percpu_devid_fasteoi_nmi+0x7c/0xbc
| generic_handle_domain_nmi+0x3c/0x60
| gic_handle_irq+0x1dc/0x310
| call_on_irq_stack+0x2c/0x54
| do_interrupt_handler+0x80/0x94
| el1_interrupt+0xb0/0xe4
| el1h_64_irq_handler+0x18/0x24
| el1h_64_irq+0x74/0x78
| lockdep_hardirqs_off+0x50/0x120
| trace_hardirqs_off+0x38/0x214
| _raw_spin_lock_irq+0x98/0xa0
| pipe_read+0x1f8/0x404
| new_sync_read+0x140/0x150
| vfs_read+0x190/0x1dc
| ksys_read+0xdc/0xfc
| __arm64_sys_read+0x20/0x30
| invoke_syscall+0x48/0x114
| el0_svc_common.constprop.0+0x158/0x17c
| do_el0_svc+0x28/0x90
| el0_svc+0x60/0x150
| el0t_64_sync_handler+0xa4/0x130
| el0t_64_sync+0x19c/0x1a0
| irq event stamp: 483
| hardirqs last enabled at (483): [<ffffbd51e636aa24>] _raw_spin_unlock_irqrestore+0xa4/0xb0
| hardirqs last disabled at (482): [<ffffbd51e636acd0>] _raw_spin_lock_irqsave+0xb0/0xb4
| softirqs last enabled at (468): [<ffffbd51e5216f58>] put_cpu_fpsimd_context+0x28/0x70
| softirqs last disabled at (466): [<ffffbd51e5216ed4>] get_cpu_fpsimd_context+0x0/0x5c
| ---[ end trace 0000000000000000 ]---

Note that as lockdep_assert_irqs_disabled() uses WARN_ON_ONCE(), and this uses a BRK, the warning is logged with the real PSTATE at the time of the warning, which clearly has DAIF.I set, meaning IRQs (and pseudo-NMIs) were definitely masked and the warning is spurious.

Fix this by selecting TRACE_IRQFLAGS_NMI_SUPPORT such that the existing entry tracking takes effect, as we had originally intended when the arm64 entry code was fixed for transitions to/from NMI.

Arguably the lockdep_assert_*() functions should have the same NMI checks as the rest of the code to prevent spurious warnings when TRACE_IRQFLAGS_NMI_SUPPORT is not selected, but the real fix for any architecture is to explicitly handle the transitions to/from NMI in the entry code.

Fixes: f0cd5ac1 ("arm64: entry: fix NMI {user, kernel}->kernel transitions")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220511131733.4074499-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
On most architectures, IRQ flag tracing is disabled in NMI context, and architectures need to define and select TRACE_IRQFLAGS_NMI_SUPPORT in order to enable this.

Commit:

  859d069e ("lockdep: Prepare for NMI IRQ state tracking")

permitted IRQ flag tracing in NMI context, allowing lockdep to work in NMI context where an architecture had suitable entry logic. At the time, most architectures did not have such suitable entry logic, and this broke lockdep on such architectures. Thus, this was partially disabled in commit:

  ed004953 ("locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs")

... with architectures needing to select TRACE_IRQFLAGS_NMI_SUPPORT to enable IRQ flag tracing in NMI context.

Currently TRACE_IRQFLAGS_NMI_SUPPORT is defined under arch/x86/Kconfig.debug. Move it to arch/Kconfig so architectures can select it without having to provide their own definition. Since the regular TRACE_IRQFLAGS_SUPPORT is selected by arch/x86/Kconfig, the select of TRACE_IRQFLAGS_NMI_SUPPORT is moved there too.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20220511131733.4074499-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Michael Walle authored
The arm64 support in the generic ARM cpuidle driver was removed. This lets us remove all the support code for it.

Signed-off-by: Michael Walle <michael@walle.cc>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20220529181329.2345722-3-michael@walle.cc
Signed-off-by: Will Deacon <will@kernel.org>
-
Michael Walle authored
Since commit 78896146 ("ARM: psci: cpuidle: Enable PSCI CPUidle driver") the generic ARM cpuidle driver doesn't probe anymore, because arm_cpuidle_init() will always return -EOPNOTSUPP. That is because the mentioned commit removes the only .cpu_suspend and .cpu_init_idle provider.

Signed-off-by: Michael Walle <michael@walle.cc>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Link: https://lore.kernel.org/r/20220529181329.2345722-2-michael@walle.cc
Signed-off-by: Will Deacon <will@kernel.org>
-
- 19 Jun, 2022 7 commits
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge tag 'x86-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:

 - Make RESERVE_BRK() work again with older binutils. The recent 'simplification' broke that.

 - Make early #VE handling increment RIP when successful.

 - Make the #VE code consistent vs. the RIP adjustments and add comments.

 - Handle load_unaligned_zeropad() across page boundaries correctly in #VE when the second page is shared.

* tag 'x86-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/tdx: Handle load_unaligned_zeropad() page-cross to a shared page
  x86/tdx: Clarify RIP adjustments in #VE handler
  x86/tdx: Fix early #VE handling
  x86/mm: Fix RESERVE_BRK() for older binutils
-
Linus Torvalds authored
Merge tag 'objtool-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull build tooling updates from Thomas Gleixner:

 - Remove obsolete CONFIG_X86_SMAP reference from objtool

 - Fix overlapping text section failures in faddr2line for real

 - Remove OBJECT_FILES_NON_STANDARD usage from x86 ftrace and replace it with finegrained annotations so objtool can validate that code correctly.

* tag 'objtool-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/ftrace: Remove OBJECT_FILES_NON_STANDARD usage
  faddr2line: Fix overlapping text section failures, the sequel
  objtool: Fix obsolete reference to CONFIG_X86_SMAP
-
Linus Torvalds authored
Merge tag 'sched-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler fix from Thomas Gleixner:
 "A single scheduler fix plugging a race between sched_setscheduler() and balance_push().

  sched_setscheduler() spliced the balance callbacks across a lock break which makes it possible for an interleaving schedule() to observe an empty list"

* tag 'sched-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Fix balance_push() vs __sched_setscheduler()
-
Linus Torvalds authored
Merge tag 'locking-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull lockdep fix from Thomas Gleixner:
 "A RT fix for lockdep. lockdep invokes prandom_u32() to create cookies. This worked until prandom_u32() was switched to the real random generator, which takes a spinlock for extraction, which does not work on RT when invoked from atomic contexts.

  lockdep has no requirement for real random numbers and it turns out sched_clock() is good enough to create the cookie. That works everywhere and is faster"

* tag 'locking-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/lockdep: Use sched_clock() for random numbers
-
Linus Torvalds authored
Merge tag 'irq-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Thomas Gleixner:
 "A set of interrupt subsystem updates:

  Core:
   - Ensure runtime power management for chained interrupts

  Drivers:
   - A collection of OF node refcount fixes
   - Unbreak MIPS uniprocessor builds
   - Fix xilinx interrupt controller Kconfig dependencies
   - Add a missing compatible string to the Uniphier driver"

* tag 'irq-urgent-2022-06-19' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/loongson-liointc: Use architecture register to get coreid
  irqchip/uniphier-aidet: Add compatible string for NX1 SoC
  dt-bindings: interrupt-controller/uniphier-aidet: Add bindings for NX1 SoC
  irqchip/realtek-rtl: Fix refcount leak in map_interrupts
  irqchip/gic-v3: Fix refcount leak in gic_populate_ppi_partitions
  irqchip/gic-v3: Fix error handling in gic_populate_ppi_partitions
  irqchip/apple-aic: Fix refcount leak in aic_of_ic_init
  irqchip/apple-aic: Fix refcount leak in build_fiq_affinity
  irqchip/gic/realview: Fix refcount leak in realview_gic_of_init
  irqchip/xilinx: Remove microblaze+zynq dependency
  genirq: PM: Use runtime PM for chained interrupts
-
Linus Torvalds authored
Merge tag 'char-misc-5.19-rc3-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver fixes for real from Greg KH:
 "Let's tag the proper branch this time... Here are some small char/misc driver fixes for 5.19-rc3 that resolve some reported issues. They include:

   - mei driver fixes
   - comedi driver fix
   - rtsx build warning fix
   - fsl-mc-bus driver fix

  All of these have been in linux-next for a while with no reported issues"

This is what the merge in commit f0ec9c65 _should_ have merged, but Greg fat-fingered the pull request and I got some small changes from linux-next instead there. Credit to Nathan Chancellor for eagle-eyes.

Link: https://lore.kernel.org/all/Yqywy+Md2AfGDu8v@dev-arch.thelio-3990X/

* tag 'char-misc-5.19-rc3-take2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  bus: fsl-mc-bus: fix KASAN use-after-free in fsl_mc_bus_remove()
  mei: me: add raptor lake point S DID
  mei: hbm: drop capability response on early shutdown
  mei: me: set internal pg flag to off on hardware reset
  misc: rtsx: Fix clang -Wsometimes-uninitialized in rts5261_init_from_hw()
  comedi: vmk80xx: fix expression for tx buffer size
-