- 24 Jun, 2021 4 commits
-
Will Deacon authored
Big cleanup of our cache maintenance routines, which were confusingly named and inconsistent in their implementations.

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`
-
Will Deacon authored
Tweak linker flags so that GDB can understand vmlinux when using RELR relocations.

* for-next/build:
  Makefile: fix GDB warning with CONFIG_RELR
-
Will Deacon authored
Boot path cleanups to enable early initialisation of per-cpu operations needed by KCSAN.

* for-next/boot:
  arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
  arm64: smp: initialize cpu offset earlier
  arm64: smp: unify task and sp setup
  arm64: smp: remove stack from secondary_data
  arm64: smp: remove pointless secondary_data maintenance
  arm64: assembler: add set_this_cpu_offset
-
Will Deacon authored
Relax frame record alignment requirements to facilitate 8-byte alignment with KASAN and Clang.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record
-
- 08 Jun, 2021 1 commit
-
Nick Desaulniers authored
GDB produces the following warning when loading a kernel built with CONFIG_RELR:

  BFD: /android0/linux-next/vmlinux: unknown type [0x13] section `.relr.dyn'

It can also prevent debugging symbols using such relocations. Peter suggests:

  [That flag] means that lld will use dynamic tags and section type numbers in the OS-specific range rather than the generic range. The kernel itself doesn't care about these numbers; it determines the location of the RELR section using symbols defined by a linker script.

Link: https://github.com/ClangBuiltLinux/linux/issues/1057
Suggested-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210522012626.2811297-1-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 27 May, 2021 1 commit
-
Will Deacon authored
The scs_load and scs_save asm macros don't make use of the mandatory 'tmp' register argument, so drop it and fix up the callers.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20210527105529.21967-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 26 May, 2021 8 commits
-
Mark Rutland authored
Now that we have a consistent place to initialize CPU context registers early in the boot path, let's also initialize the per-cpu offset here. This makes the primary and secondary boot paths more consistent, and allows for the use of per-cpu operations earlier, which will be necessary for instrumentation with KCSAN.

Note that smp_prepare_boot_cpu() still needs to re-initialize CPU0's offset, as immediately prior to this the per-cpu areas may be reallocated, and hence the boot-time offset may be stale. A comment is added to make this clear.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
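To make the boot-CPU caveat concrete, here is a minimal C sketch (set_my_cpu_offset() and per_cpu_offset() are existing kernel helpers; the body shown is illustrative, not the patch itself):

    /* Sketch only: CPU0's offset was set early in the boot path, but
     * the per-cpu areas may have been reallocated since, so refresh it. */
    void smp_prepare_boot_cpu(void)
    {
            set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
    }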
-
Mark Rutland authored
Once we enable the MMU, we have to initialize:

 * SP_EL0 to point at the active task
 * SP to point at the active task's stack
 * SCS_SP to point at the active task's shadow stack

For all tasks (including init_task), this information can be derived from the task's task_struct. Let's unify __primary_switched and __secondary_switched to consistently acquire this information from the relevant task_struct. At the same time, let's fold this together with initializing a task's final frame.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
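A C-level sketch of the unified setup (the real change is in head.S assembly; task_stack_page() and task_scs_sp() are the generic helpers, and the final register writes are only indicated by a comment):

    /* Sketch: every piece of CPU context derives from one task_struct. */
    static void init_cpu_context(struct task_struct *tsk)
    {
            unsigned long sp_el0 = (unsigned long)tsk;               /* current task */
            unsigned long sp     = (unsigned long)task_stack_page(tsk)
                                   + THREAD_SIZE;                    /* top of stack */
            unsigned long scs_sp = (unsigned long)task_scs_sp(tsk);  /* shadow stack */

            /* ...move sp_el0/sp/scs_sp into SP_EL0, SP and x18... */
    }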
-
Mark Rutland authored
When we boot a secondary CPU, we pass it a task and a stack to use. As the stack is always the task's stack, which can be derived from the task, let's have the secondary CPU derive this itself and avoid passing redundant information.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
All reads and writes of secondary_data occur with the MMU on, using coherent attributes, so there's no need to perform any cache maintenance for this.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
Merge in stack unwinding work to minimise conflicts in head.S.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record
-
Peter Collingbourne authored
The AAPCS places no requirements on the alignment of the frame record. In theory it could be placed anywhere, although it seems sensible to require it to be aligned to 8 bytes. With an upcoming enhancement to tag-based KASAN, Clang will begin creating frame records located at an address that is only aligned to 8 bytes. Accommodate such frame records in the stack unwinding code.

As pointed out by Mark Rutland, the userspace stack unwinding code has the same problem, so fix it there as well.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ia22c375230e67ca055e9e4bb639383567f7ad268
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-2-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
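A minimal sketch of the relaxed validity check in the unwinder (illustrative; the real code also checks that the record lies on a known stack, per the next commit):

    /* Frame records must now only be 8-byte aligned, not 16. */
    static bool fp_is_aligned(unsigned long fp)
    {
            return !(fp & 0x7);     /* previously: !(fp & 0xf) */
    }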
-
Peter Collingbourne authored
unwind_frame() was previously implicitly checking that the frame record is in bounds of the stack by enforcing that FP is both aligned to 16 and in bounds of the stack. Once the FP alignment requirement is relaxed to 8, this will not be sufficient because it does not account for the case where FP points to 8 bytes before the end of the stack.

Make the check explicit by changing the on_*stack functions to take a size argument and adjusting the callers to pass the appropriate sizes.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ib7a3eb3eea41b0687ffaba045ceb2012d077d8b4
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
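A sketch of the explicit form of the check (names and exact shape assumed; the kernel's on_*stack helpers follow this pattern):

    /* An object of `size` bytes at sp must fit entirely on the stack. */
    static bool on_stack(unsigned long sp, unsigned long size,
                         unsigned long low, unsigned long high)
    {
            if (!low)
                    return false;
            return sp >= low && sp + size <= high && sp + size >= sp;
    }

Passing the frame-record size catches the case where FP points 8 bytes before the end of the stack.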
-
- 25 May, 2021 19 commits
-
Fuad Tabba authored
Although naming across the codebase isn't that consistent, it tends to follow certain patterns. Moreover, the term "flush" isn't defined in the Arm Architecture Reference Manual, and might be interpreted to mean clean, invalidate, or both for a cache.

Rename arm64-internal functions to make the naming internally consistent, as well as consistent with the Arm ARM, by specifying whether the operation applies to the instruction cache, the data cache, or both, and whether it is a clean, an invalidate, or both. Also specify which point the operation applies to, i.e., the point of unification (PoU), coherency (PoC), or persistence (PoP).

This commit applies the following sed transformation to all files under arch/arm64:

  "s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
  "s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
  "s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
  "s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
  "s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
  "s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
  "s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
  "s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
  "s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
  "s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that __clean_dcache_area_poc is deliberately missing a word boundary check at the beginning in order to match the efistub symbols in image-vars.h.

Also note that, despite its name, __flush_icache_range operates on both instruction and data caches. The name change here reflects that.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-19-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
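To make the scheme concrete, the resulting names as C declarations (a summary sketch; the signatures assume the series' start/end convention rather than quoting the header verbatim):

    /* I + D caches, clean+invalidate to the PoU */
    void caches_clean_inval_pou(unsigned long start, unsigned long end);
    /* I-cache only, invalidate to the PoU */
    void icache_inval_pou(unsigned long start, unsigned long end);
    /* D-cache, clean+invalidate to the PoC */
    void dcache_clean_inval_poc(unsigned long start, unsigned long end);
    /* D-cache, invalidate to the PoC */
    void dcache_inval_poc(unsigned long start, unsigned long end);
    /* D-cache, clean to the PoC, PoP and PoU respectively */
    void dcache_clean_poc(unsigned long start, unsigned long end);
    void dcache_clean_pop(unsigned long start, unsigned long end);
    void dcache_clean_pou(unsigned long start, unsigned long end);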
-
Fuad Tabba authored
Fix and expand the comments for the cache maintenance functions in cacheflush.h, adding comments to functions that weren't described before and explaining what the functions do using Arm Architecture Reference Manual terminology.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-18-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-17-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
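The caller-side conversion, sketched below (the surrounding function and identifiers are illustrative); the sibling commits that follow apply the same shape to the other maintenance functions:

    static void example_caller(void *kaddr, size_t len)
    {
            unsigned long start = (unsigned long)kaddr;

            /* old: sync_icache_aliases(kaddr, len);   -- (start, size) */
            sync_icache_aliases(start, start + len);   /* (start, end)  */
    }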
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-16-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-15-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

Because the code is shared with __dma_clean_area, this changes the parameters for that as well. However, __dma_clean_area is local to cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-14-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-13-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-12-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size.

Because the code is shared with __dma_inv_area, this changes the parameters for that as well. However, __dma_inv_area is local to cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-11-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
Many comments refer to the function flush_icache_range, where the intent is in fact __flush_icache_range. Fix these comments to refer to the intended function. That's probably due to commit 3b8c9f1c ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings"), which renamed flush_icache_range() to __flush_icache_range() and added a wrapper.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-10-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
The comment describing the dcache_by_line_op macro is placed above the preceding macro rather than the one it documents, which is confusing. Move it to the macro it describes (dcache_by_line_op).

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-9-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
user_alt is no longer used anywhere. It's also simpler and clearer to use alternative_insn and _cond_extable directly in-line when needed.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/20210520125735.GF17233@C02TD0UTHF1T.local/
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-8-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
Since __flush_dcache_area is called right before, invalidate_icache_range is sufficient in this case. Rewrite the comment to better explain the rationale behind the cache maintenance operations used here.

No functional change intended. Possible performance impact due to invalidating only the icache rather than invalidating and cleaning both caches.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-7-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
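A sketch of the pattern the change relies on (the call sequence follows the description above; the surrounding function is illustrative):

    static void publish_instructions(void *start, size_t size)
    {
            /* D-side maintenance has already been done here... */
            __flush_dcache_area(start, size);
            /* ...so the I-side only needs an invalidate, not a full flush. */
            invalidate_icache_range((unsigned long)start,
                                    (unsigned long)start + size);
    }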
-
Fuad Tabba authored
invalidate_icache_range() works on kernel addresses, and doesn't need uaccess. Remove the code that toggles uaccess_ttbr0_enable, as well as the code that emits an entry into the exception table (via the macro invalidate_icache_by_line).

Change the return type of invalidate_icache_range() from int (which used to indicate a fault) to void, since it doesn't need uaccess and won't fault. Note that the return value was never checked by any of the callers.

No functional change intended. Possible performance impact due to the reduced number of instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-6-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
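The resulting interface, sketched from the description above (exact prototype assumed):

    /* Before: int invalidate_icache_range(unsigned long start, unsigned long end);
     * the int reported a user-access fault, but no caller ever checked it. */
    void invalidate_icache_range(unsigned long start, unsigned long end);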
-
Fuad Tabba authored
__flush_icache_range works on kernel addresses, and doesn't need uaccess. The existing uaccess handling is a side-effect of its current implementation, which falls through from __flush_cache_user_range.

Instead of sharing the code via fallthrough, use a common macro for the two where the caller specifies an optional fixup label if user access is needed. If provided, this label is used to generate an extable entry.

Simplify the code to use dcache_by_line_op, instead of replicating much of its functionality.

No functional change intended. Possible performance impact due to the reduced number of instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Link: https://lore.kernel.org/linux-arm-kernel/20210521121846.GB1040@C02TD0UTHF1T.local/
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-5-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
The Arm errata covered by ARM64_WORKAROUND_CLEAN_CACHE require that "dc cvau" instructions get promoted to "dc civac".

Reported-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-4-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
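A sketch of how such a promotion is expressed with the arm64 alternatives framework (the patch itself touches assembly; this inline-asm form shows the same mechanism):

    static inline void clean_dcache_line(unsigned long addr)
    {
            /* With ARM64_WORKAROUND_CLEAN_CACHE, the clean-to-PoU is
             * promoted to a clean+invalidate-to-PoC. */
            asm volatile(ALTERNATIVE("dc cvau, %0", "dc civac, %0",
                                     ARM64_WORKAROUND_CLEAN_CACHE)
                         : : "r" (addr) : "memory");
    }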
-
Mark Rutland authored
It would be helpful if we could use both `dcache_by_line_op` and `invalidate_icache_by_line` for user memory without accidentally fixing up unexpected faults when performing maintenance on kernel addresses.

Let's make this possible by having both macros take an optional fixup label, and only generating an extable entry if a label is provided. At the same time, let's clean up the labels used to be globally unique using \@ as we do for other macros.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-3-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
The `__dcache_op_workaround_clean_cache` and `dcache_by_line_op` macros are only expected to be used on kernel memory, without a user fault fixup, and so we named their address variables `kaddr` to make this clear. Subsequent patches will modify these to also work on user memory with an (optional) user fault fixup, where `kaddr` won't make as much sense. To aid the legibility of patches, this patch (only) replaces `kaddr` with `addr` as a preparatory step.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-2-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Madhavan T. Venkataraman authored
Reliable stacktracing requires that we identify when a stacktrace is terminated early. We can do this by ensuring all tasks have a final frame record at a known location on their task stack, and checking that this is the final frame record in the chain.

We'd like to use task_pt_regs(task)->stackframe as the final frame record, as this is already set up upon exception entry from EL0. For kernel tasks we need to consistently reserve the pt_regs and point x29 at this, which we can do with small changes to __primary_switched, __secondary_switched, and copy_process().

Since the final frame record must be at a specific location, we must create the final frame record in __primary_switched and __secondary_switched rather than leaving this to start_kernel and secondary_start_kernel. Thus, __primary_switched and __secondary_switched will now show up in stacktraces for the idle tasks.

Since the final frame record is now identified by its location rather than by its contents, we identify it at the start of unwind_frame(), before we read any values from it.

External debuggers may terminate the stack trace when FP == 0. In the pt_regs->stackframe, the PC is 0 as well. So, stack traces taken in the debugger may print an extra record 0x0 at the end. While this is not pretty, it does no harm and is a small price to pay for reliable stack trace termination in the kernel. That said, gdb does not show the extra record, probably because it uses DWARF rather than frame pointers for stack traces.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
[Mark: rebase, use ASM_BUG(), update comments, update commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210510110026.18061-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
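A simplified sketch of the location-based termination check (assuming the generic task_pt_regs() helper; the real unwind_frame() performs more validation):

    static int unwind_frame(struct task_struct *tsk, struct stackframe *frame)
    {
            unsigned long fp = frame->fp;

            /* The final frame record lives at task_pt_regs(tsk)->stackframe;
             * identify it by location before reading anything from it. */
            if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
                    return -ENOENT;     /* clean, expected end of trace */

            /* ...otherwise read the next fp/pc pair and continue... */
            return 0;
    }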
-
- 23 May, 2021 7 commits
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge tag 'perf-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf fixes from Thomas Gleixner:
 "Two perf fixes:
  - Do not check the LBR_TOS MSR when setting up unrelated LBR MSRs as this can cause malfunction when TOS is not supported
  - Allocate the LBR XSAVE buffers along with the DS buffers upfront because allocating them when adding an event can deadlock"

* tag 'perf-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/lbr: Remove cpuc->lbr_xsave allocation from atomic context
  perf/x86: Avoid touching LBR_TOS MSR for Arch LBR
-
Linus Torvalds authored
Merge tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking fixes from Thomas Gleixner:
 "Two locking fixes:
  - Invoke the lockdep tracepoints in the correct place so the ordering is correct again
  - Don't leave the mutex WAITER bit stale when the last waiter is dropping out early due to a signal, as that forces all subsequent lock operations needlessly into the slowpath until it's cleaned up again"

* tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal
  locking/lockdep: Correct calling tracepoints
-
Linus Torvalds authored
Merge tag 'irq-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A few fixes for irqchip drivers:
  - Allocate interrupt descriptors correctly on Mainstone PXA when SPARSE_IRQ is enabled; otherwise the interrupt association fails
  - Make the APPLE_AIC chip driver depend on ARCH_APPLE
  - Remove redundant error output on devm_ioremap_resource() failure"

* tag 'irq-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip: Remove redundant error printing
  irqchip/apple-aic: APPLE_AIC should depend on ARCH_APPLE
  ARM: PXA: Fix cplds irqdesc allocation when using legacy mode
-
Linus Torvalds authored
Merge tag 'x86_urgent_for_v5.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:
 - Fix how SEV handles MMIO accesses, by forwarding potential page faults instead of killing the machine and by using the accessors with the exact functionality needed when accessing memory
 - Fix a confusion with Clang LTO compiler switches passed to it
 - Handle the case gracefully when VMGEXIT has been executed in userspace

* tag 'x86_urgent_for_v5.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sev-es: Use __put_user()/__get_user() for data accesses
  x86/sev-es: Forward page-faults which happen during emulation
  x86/sev-es: Don't return NULL from sev_es_get_ghcb()
  x86/build: Fix location of '-plugin-opt=' flags
  x86/sev-es: Invalidate the GHCB after completing VMGEXIT
  x86/sev-es: Move sev_es_put_ghcb() in prep for follow on patch
-
Linus Torvalds authored
Merge tag 'powerpc-5.13-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:
 - Fix breakage of strace (and other ptracers etc.) when using the new scv ABI (Power9 or later with glibc >= 2.33)
 - Fix early_ioremap() on 64-bit, which broke booting on some machines

 Thanks to Dmitry V. Levin, Nicholas Piggin, Alexey Kardashevskiy, and Christophe Leroy.

* tag 'powerpc-5.13-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/64s/syscall: Fix ptrace syscall info with scv syscalls
  powerpc/64s/syscall: Use pt_regs.trap to distinguish syscall ABI difference between sc and scv syscalls
  powerpc: Fix early setup to make early_ioremap() work
-
Linus Torvalds authored
Merge tag 'kbuild-fixes-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild

Pull Kbuild fixes from Masahiro Yamada:
 - Fix short log indentation for tools builds
 - Fix dummy-tools to adjust to the latest stackprotector check

* tag 'kbuild-fixes-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: dummy-tools: adjust to stricter stackprotector check
  scripts/jobserver-exec: Fix a typo ("envirnoment")
  tools build: Fix quiet cmd indentation
-