- 20 Apr, 2023 5 commits
-
-
Will Deacon authored
* for-next/kdump:
  arm64: kdump: defer the crashkernel reservation for platforms with no DMA memory zones
  arm64: kdump: do not map crashkernel region specifically
  arm64: kdump: take off the protection on crashkernel memory region
-
Will Deacon authored
* for-next/ftrace:
  arm64: ftrace: Simplify get_ftrace_plt
  arm64: ftrace: Add direct call support
  ftrace: selftest: remove broken trace_direct_tramp
  ftrace: Make DIRECT_CALLS work WITH_ARGS and !WITH_REGS
  ftrace: Store direct called addresses in their ops
  ftrace: Rename _ftrace_direct_multi APIs to _ftrace_direct APIs
  ftrace: Remove the legacy _ftrace_direct API
  ftrace: Replace uses of _ftrace_direct APIs with _ftrace_direct_multi
  ftrace: Let unregister_ftrace_direct_multi() call ftrace_free_filter()
-
Will Deacon authored
* for-next/cpufeature:
  arm64/cpufeature: Use helper macro to specify ID register for capabilities
  arm64/cpufeature: Consistently use symbolic constants for min_field_value
  arm64/cpufeature: Pull out helper for CPUID register definitions
-
Will Deacon authored
* for-next/asm:
  arm64: uaccess: remove unnecessary earlyclobber
  arm64: uaccess: permit put_{user,kernel} to use zero register
  arm64: uaccess: permit __smp_store_release() to use zero register
  arm64: atomics: lse: improve cmpxchg implementation
-
Will Deacon authored
* for-next/acpi:
  ACPI: AGDI: Improve error reporting for problems during .remove()
-
- 17 Apr, 2023 4 commits
-
-
Mark Brown authored
When defining which value to look for in a system register field, we currently manually specify the register, the field shift, width and sign, and the value to look for. This opens the potential for errors: for example, the wrong field width or sign may be specified, an enumeration value for a different but similarly named field may be used, or something may be left initialised to 0. Since we now generate defines for all the ID registers, we have named constants for all of these things generated from the system register description, meaning that we can generate the initialisation for all the fields used in matching from a minimal specification of register, field and match value. This is both shorter and either eliminates several potential errors outright or turns them into build failures. No change in the generated binary. Signed-off-by:
Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230303-arm64-cpufeature-helpers-v2-3-4c8f28a6f203@kernel.org [will: Drop explicit '.sign' assignment for BTI feature] Signed-off-by:
Will Deacon <will@kernel.org>
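As an illustration of the shape this takes (a sketch based on the commit description; the macro name ARM64_CPUID_FIELDS and the generated define names follow the series, so treat the exact spellings as assumptions):

	/* Generate all match fields from <register, field, min value>. */
	#define ARM64_CPUID_FIELDS(reg, field, min_value)		\
		.sys_reg = SYS_##reg,					\
		.field_pos = reg##_##field##_SHIFT,			\
		.field_width = reg##_##field##_WIDTH,			\
		.sign = reg##_##field##_SIGNED,				\
		.min_field_value = reg##_##field##_##min_value,

	/* A capability entry then shrinks to, e.g.: */
	{
		.desc = "Branch Target Identification",
		.capability = ARM64_BTI,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, BT, IMP)
	},

Getting the width or sign wrong becomes impossible by construction, and a typo in the register or field name fails to build instead of silently matching the wrong field.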
-
Mark Brown authored
A number of the cpufeatures use raw numbers for the minimum field values rather than symbolic constants. In preparation for the use of helper macros, replace all of these with the appropriate constants. No change in the generated binary. Signed-off-by:
Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230303-arm64-cpufeature-helpers-v2-2-4c8f28a6f203@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
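For instance (illustrative before/after; the constant is one of the sysreg-generated defines):

	/* Before: raw number, easy to get wrong or misread. */
	.min_field_value = 1,

	/* After: symbolic constant from the sysreg description. */
	.min_field_value = ID_AA64PFR1_EL1_BT_IMP,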
-
Mark Brown authored
We use the same structure to match hwcaps and CPU features so we can use the same helper to generate the fields required. Pull the portion of the current hwcaps helper that initialises the fields out into a separate define placed earlier in the file so we can use it for cpufeatures. No functional change. Signed-off-by:
Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20230303-arm64-cpufeature-helpers-v2-1-4c8f28a6f203@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Uwe Kleine-König authored
Returning an error value in a platform driver's remove callback results in a generic error message being emitted by the driver core, but otherwise it doesn't make a difference. The device goes away anyhow. So instead of triggering the generic platform error message, emit a more helpful message if a problem occurs and return 0 to suppress the generic message. This patch is a preparation for making platform remove callbacks return void. Signed-off-by:
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by:
Lorenzo Pieralisi <lpieralisi@kernel.org> Link: https://lore.kernel.org/r/20221014160623.467195-1-u.kleine-koenig@pengutronix.de Signed-off-by:
Will Deacon <will@kernel.org>
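A minimal sketch of the pattern (hypothetical driver; teardown_hw() is a placeholder for whatever can fail during removal):

	#include <linux/device.h>
	#include <linux/platform_device.h>

	static int my_pdev_remove(struct platform_device *pdev)
	{
		int ret = teardown_hw(pdev);	/* hypothetical helper */

		if (ret)
			/* Specific message instead of the generic core one. */
			dev_err(&pdev->dev, "failed to tear down: %d\n", ret);

		/*
		 * Return 0: the device goes away regardless, and a non-zero
		 * return only adds a less helpful generic error message.
		 */
		return 0;
	}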
-
- 11 Apr, 2023 6 commits
-
-
Baoquan He authored
In commit 03149563 ("arm64: Do not defer reserve_crashkernel() for platforms with no DMA memory zones"), reserve_crashkernel() is called much earlier, in arm64_memblock_init(), to avoid forcing base page mapping on platforms with no DMA memory zones. Now that the protection on the crashkernel memory region has been taken off, there is no need to call reserve_crashkernel() specially in advance. The deferred invocation of reserve_crashkernel() in bootmem_init() can cover all cases. So revert the whole commit now. Signed-off-by:
Baoquan He <bhe@redhat.com> Reviewed-by:
Zhen Lei <thunder.leizhen@huawei.com> Link: https://lore.kernel.org/r/20230407011507.17572-4-bhe@redhat.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Baoquan He authored
After taking off the protection functions on the crashkernel memory region, there's no need to map the crashkernel region with page granularity during linear mapping. With this change, the system can make use of block or section mapping in the linear region, which greatly improves performance during system boot and at runtime. Signed-off-by:
Baoquan He <bhe@redhat.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Mike Rapoport (IBM) <rppt@kernel.org> Reviewed-by:
Zhen Lei <thunder.leizhen@huawei.com> Link: https://lore.kernel.org/r/20230407011507.17572-3-bhe@redhat.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Baoquan He authored
Problem:
========
On arm64, block and section mappings are supported when building page tables. However, base page mapping is currently enforced for the whole linear mapping if CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled and the crashkernel kernel parameter is set. This makes the linear mapping process take longer during boot and causes severe performance degradation at runtime.

Root cause:
===========
On arm64, crashkernel reservation relies on knowing the upper limit of the low memory zone, because it needs to reserve memory in that zone so that devices' DMA addressing in the kdump kernel can be satisfied. However, the upper limit of low memory on arm64 varies, and can only be determined late, when bootmem_init() is called [1]. We also need to map the crashkernel region with base page granularity when doing the linear mapping, because kdump needs to protect the crashkernel region via set_memory_valid(,0) after loading the kdump kernel. However, arm64 does not cope well with splitting an already-built block or section mapping, due to a CPU restriction [2]. And unfortunately, the linear mapping is done before bootmem_init().

To resolve this conflict on arm64, the compromise is to enforce base page mapping for the entire linear mapping if crashkernel is set and CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled. Hence performance is sacrificed.

Solution:
=========
Compared with base page mapping for the whole linear region, it is better to take off the protection on the crashkernel memory region for the time being, because the stamping of the crashkernel memory region that the protection guards against has only a one-in-a-million chance of happening, while base page mapping for the whole linear region penalises every arm64 system with crashkernel set.

[1] https://lore.kernel.org/all/YrIIJkhKWSuAqkCx@arm.com/T/#u
[2] https://lore.kernel.org/linux-arm-kernel/20190911182546.17094-1-nsaenzjulienne@suse.de/T/
Signed-off-by:
Baoquan He <bhe@redhat.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Mike Rapoport (IBM) <rppt@kernel.org> Reviewed-by:
Zhen Lei <thunder.leizhen@huawei.com> Link: https://lore.kernel.org/r/20230407011507.17572-2-bhe@redhat.com Signed-off-by:
Will Deacon <will@kernel.org>
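For reference, the protection being dropped worked along these lines (a simplified sketch of the removed arm64 hooks; the real code iterated over the loaded kexec segments):

	/* Invalidate the crashkernel pages so stray writes fault ... */
	void arch_kexec_protect_crashkres(void)
	{
		set_memory_valid(__phys_to_virt(crashk_res.start),
				 resource_size(&crashk_res) >> PAGE_SHIFT, 0);
	}

	/* ... and re-validate them around kexec load/unload. */
	void arch_kexec_unprotect_crashkres(void)
	{
		set_memory_valid(__phys_to_virt(crashk_res.start),
				 resource_size(&crashk_res) >> PAGE_SHIFT, 1);
	}

set_memory_valid() only works on page-granular mappings, which is exactly why keeping this protection forced base pages for the region and, per the constraints above, for the whole linear map.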
-
Florent Revest authored
Following recent refactorings, the get_ftrace_plt function only ever gets called with addr = FTRACE_ADDR, so its code can be simplified to always return the ftrace trampoline PLT. Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20230405180250.2046566-3-revest@chromium.org Signed-off-by:
Will Deacon <will@kernel.org>
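The simplified function then reduces to something like (a sketch; the config guard and identifiers follow the arm64 code of the time):

	static struct plt_entry *get_ftrace_plt(struct module *mod)
	{
	#ifdef CONFIG_ARM64_MODULE_PLTS
		struct plt_entry *plt = mod->arch.ftrace_trampolines;

		/* addr is always FTRACE_ADDR, so always return this entry. */
		return &plt[FTRACE_PLT_IDX];
	#else
		return NULL;
	#endif
	}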
-
Florent Revest authored
This builds on the CALL_OPS work, which extends the ftrace patchsite on arm64 with an ops pointer usable by the ftrace trampoline. This ops pointer is valid at all times: it points either to ftrace_list_ops or to the single ops which should be called from that patchsite. There are a few cases to distinguish:
- If a direct call ops is the only one tracing a function:
  - If the direct called trampoline is within the reach of a BL instruction
    -> the ftrace patchsite jumps to the trampoline
  - Else
    -> the ftrace patchsite jumps to the ftrace_caller trampoline, which reads the ops pointer in the patchsite and jumps to the direct call address stored in the ops
- Else
  -> the ftrace patchsite jumps to the ftrace_caller trampoline, whose ops literal points to ftrace_list_ops, so it iterates over all registered ftrace ops, including the direct call ops, and calls its call_direct_funcs handler, which stores the direct called trampoline's address in the ftrace_regs; the ftrace_caller trampoline will then return to that address instead of returning to the traced function. Signed-off-by:
Florent Revest <revest@chromium.org> Co-developed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20230405180250.2046566-2-revest@chromium.org Signed-off-by:
Will Deacon <will@kernel.org>
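In rough C-style pseudocode, the patch-site decision above looks like this (a sketch only; target_in_bl_range() is a hypothetical helper, and the real logic lives in the arm64 ftrace_make_call machinery):

	static unsigned long choose_patchsite_target(unsigned long pc,
						     struct ftrace_ops *ops)
	{
		unsigned long direct = ops->direct_call;

		/* Sole direct call ops and trampoline in BL range? */
		if (direct && target_in_bl_range(pc, direct))
			return direct;	/* BL straight to the trampoline */

		/*
		 * Otherwise BL to ftrace_caller, which loads the ops pointer
		 * stored at the patchsite: either it holds the direct call
		 * address, or it is ftrace_list_ops and call_direct_funcs
		 * stores the address in ftrace_regs for the return path.
		 */
		return (unsigned long)ftrace_caller;
	}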
-
Will Deacon authored
Merge tag 'trace-direct-v6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace into for-next/ftrace

Pull in ftrace trampoline updates from Steve so that we can implement support for direct calls for arm64 on top:

tracing: Direct trampoline updates

Updates to the direct trampoline to allow ARM64 to have direct trampolines.
-
- 28 Mar, 2023 4 commits
-
-
Mark Rutland authored
Currently the asm constraints for __get_mem_asm() mark the value register as an earlyclobber operand. This means that the compiler can't reuse the same register for both the address and value, even when the value is not subsequently used. There's no need for the value register to be marked as earlyclobber, as it's only written to after the address register is consumed, even when the access faults. Remove the unnecessary earlyclobber. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230314153700.787701-5-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
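The change amounts to relaxing the output constraint (a simplified illustration; the real macro carries extable annotations and more operands):

	/* Before: earlyclobber forbids sharing a register with the address. */
	asm volatile("ldtr %0, [%1]" : "=&r" (val) : "r" (addr));

	/* After: plain "=r" lets the compiler reuse the address register
	 * for the value once the address has been consumed. */
	asm volatile("ldtr %0, [%1]" : "=r" (val) : "r" (addr));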
-
Mark Rutland authored
Currently the asm constraints for __put_mem_asm() require that the value is placed in a "real" GPR (i.e. one other than [XW]ZR or SP). This means that for cases such as:

  __put_user(0, addr)

... the compiler has to move '0' into a "real" GPR, e.g.

  mov	xN, #0
  sttr	xN, [<addr>]

This is unfortunate, as using the zero register would require fewer instructions and save a "real" GPR for other usage, allowing the compiler to generate:

  sttr	xzr, [<addr>]

Modify the asm constraints for __put_mem_asm() to permit the use of the zero register for the value. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230314153700.787701-4-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
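The key change is the value operand's constraint (a simplified illustration of the pattern):

	/* Before: "r" demands a real GPR, forcing 'mov xN, #0' for zero. */
	asm volatile("sttr %x0, [%1]" : : "r" (val), "r" (addr) : "memory");

	/* After: "rZ" plus the %x modifier emits xzr when val is 0. */
	asm volatile("sttr %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");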
-
Mark Rutland authored
Currently the asm constraints for __smp_store_release() require that the value is placed in a "real" GPR (i.e. one other than [XW]ZR or SP). This means that for cases such as:

  __smp_store_release(ptr, 0)

... the compiler has to move '0' into a "real" GPR, e.g.

  mov	xN, #0
  stlr	xN, [<addr>]

This is unfortunate, as using the zero register would require fewer instructions and save a "real" GPR for other usage, allowing the compiler to generate:

  stlr	xzr, [<addr>]

Modify the asm constraints for __smp_store_release() to permit the use of the zero register for the value. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230314153700.787701-3-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
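The same trick applied to the release store (a sketch; the real macro handles the 1-, 2-, 4- and 8-byte cases):

	#define my_store_release(ptr, val)				\
		asm volatile("stlr %x1, %0"				\
			     : "=Q" (*(ptr))				\
			     : "rZ" ((unsigned long)(val))		\
			     : "memory")

With this, my_store_release(p, 0) compiles to a single 'stlr xzr, [p]'.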
-
Mark Rutland authored
For historical reasons, the LSE implementation of cmpxchg*() hard-codes the GPRs to use, and shuffles registers around with MOVs. This is no longer necessary, and can be simplified.

When the LSE cmpxchg implementation was added in commit:

  c342f782 ("arm64: cmpxchg: patch in lse instructions when supported by the CPU")

... the LL/SC implementation of cmpxchg() would be placed out-of-line, and the in-line assembly for cmpxchg would default to:

  NOP
  BL	<ll_sc_cmpxchg*_implementation>
  NOP

The LL/SC implementation of each cmpxchg() function accepted arguments as per AAPCS64 rules, so it was necessary to place the pointer in x0, the old value in x1, and the new value in x2, and to acquire the return value from x0. The LL/SC implementation required a temporary register (e.g. for the STXR status value). As the LL/SC implementation preserved the old value, the LSE implementation does likewise.

Since commit:

  addfc386 ("arm64: atomics: avoid out-of-line ll/sc atomics")

... the LSE and LL/SC implementations of cmpxchg are inlined as separate asm blocks, with another branch choosing between the two. Due to this, it is no longer necessary for the LSE implementation to match the register constraints of the LL/SC implementation. This was partially dealt with by removing the hard-coded use of x30 in commit:

  3337cb5a ("arm64: avoid using hard-coded registers for LSE atomics")

... but we didn't clean up the hard-coding of x0, x1, and x2.

This patch simplifies the LSE implementation of cmpxchg, removing the register shuffling and directly clobbering the 'old' argument. This gives the compiler greater freedom for register allocation, and avoids redundant work.

The new constraints permit 'old' (Rs) and 'new' (Rt) to be allocated to the same register when the initial values of the two are the same, e.g. resulting in:

  CAS	X0, X0, [X1]

This is safe as Rs is only written back after the initial values of Rs and Rt are consumed, and there are no UNPREDICTABLE behaviours to avoid when Rs == Rt.

The new constraints also permit 'new' to be allocated to the zero register, avoiding a MOV in a few cases. The same cannot be done for 'old' as it is both an input and output, and any caller of cmpxchg() should care about the output value. Note that for CAS* the use of the zero register never affects the ordering (while for SWP* the use of the zero register for the 'old' value drops any ACQUIRE semantic).

Compared to v6.2-rc4, a defconfig vmlinux is ~116KiB smaller, though the resulting Image is the same size due to internal alignment and padding:

  [mark@lakrids:~/src/linux]% ls -al vmlinux-*
  -rwxr-xr-x 1 mark mark 137269304 Jan 16 11:59 vmlinux-after
  -rwxr-xr-x 1 mark mark 137387936 Jan 16 10:54 vmlinux-before
  [mark@lakrids:~/src/linux]% ls -al Image-*
  -rw-r--r-- 1 mark mark 38711808 Jan 16 11:59 Image-after
  -rw-r--r-- 1 mark mark 38711808 Jan 16 10:54 Image-before

This patch does not touch cmpxchg_double*() as that requires contiguous register pairs, and separate patches will replace it with cmpxchg128*(). There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230314153700.787701-2-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
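The resulting inline assembly is roughly (a simplified 64-bit sketch; the real code is generated for all sizes and memory orderings, and requires LSE):

	static inline unsigned long my_cmpxchg64(unsigned long *ptr,
						 unsigned long old,
						 unsigned long new)
	{
		/*
		 * '+r' on old: CAS reads the compare value from Rs and
		 * writes the observed value back, so no MOV shuffling.
		 * 'rZ' on new: a zero 'new' can use xzr directly.
		 */
		asm volatile("cas %x[old], %x[new], %[mem]"
			     : [old] "+r" (old), [mem] "+Q" (*ptr)
			     : [new] "rZ" (new)
			     : "memory");
		return old;
	}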
-
- 21 Mar, 2023 7 commits
-
-
Mark Rutland authored
The ftrace selftest code has a trace_direct_tramp() function which it uses as a direct call trampoline. This happens to work on x86, since the direct call's return address is in the usual place, and can be returned to via a RET, but in general the calling convention for direct calls is different from regular function calls, and requires a trampoline written in assembly.

On s390, regular function calls place the return address in %r14, and an ftrace patch-site in an instrumented function places the trampoline's return address (which is within the instrumented function) in %r0, preserving the original %r14 value in-place. As a regular C function will return to the address in %r14, using a C function as the trampoline results in the trampoline returning to the caller of the instrumented function, skipping the body of the instrumented function. Note that the s390 issue is not detected by the ftrace selftest code, as the instrumented function is trivial, and returning back into the caller happens to be equivalent.

On arm64, regular function calls place the return address in x30, and an ftrace patch-site in an instrumented function saves this into x9 and places the trampoline's return address (within the instrumented function) in x30. A regular C function will return to the address in x30, but will not restore x9 into x30. Consequently, using a C function as the trampoline results in returning to the trampoline's return address having corrupted x30, such that when the instrumented function returns, it will return back into itself.

To avoid future issues in this area, remove the trace_direct_tramp() function, and require that each architecture with direct calls provides a stub trampoline, named ftrace_stub_direct_tramp. This can be written to handle the architecture's trampoline calling convention, and in future could be used elsewhere (e.g. in the ftrace ops sample, to measure the overhead of direct calls), so we may as well always build it in.

Link: https://lkml.kernel.org/r/20230321140424.345218-8-revest@chromium.org
Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Li Huafei <lihuafei1@huawei.com> Cc: Xu Kuohai <xukuohai@huawei.com> Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
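On arm64 the stub must follow the convention described above: return to the address in x30 while restoring the saved return address from x9, something a C function cannot express. A sketch of the arm64 flavour (treat the exact listing as illustrative):

	SYM_CODE_START(ftrace_stub_direct_tramp)
		bti	c
		mov	x10, x30	// trampoline return address
		mov	x30, x9		// restore the real return address
		ret	x10		// return into the instrumented function
	SYM_CODE_END(ftrace_stub_direct_tramp)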
-
Florent Revest authored
Direct called trampolines can be called in two ways:
- either from the ftrace callsite; in this case, they do not access any struct ftrace_regs nor pt_regs
- or, if an ftrace ops is also attached, from the end of an ftrace trampoline; in this case, the call_direct_funcs ops is in charge of setting the direct call trampoline's address in a struct ftrace_regs

Since commit 9705bc70 ("ftrace: pass fregs to arch_ftrace_set_direct_caller()"), the latter case no longer requires a full pt_regs. It only needs a struct ftrace_regs, so DIRECT_CALLS can work both WITH_ARGS and WITH_REGS. With architectures like arm64 already abandoning WITH_REGS in favor of WITH_ARGS, it's important to have DIRECT_CALLS work WITH_ARGS only.

Link: https://lkml.kernel.org/r/20230321140424.345218-7-revest@chromium.org
Signed-off-by:
Florent Revest <revest@chromium.org> Co-developed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
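With only ftrace_regs required, the architecture hook can be as small as (a sketch modeled on the arm64 approach; the field name 'direct_tramp' is an assumption):

	static inline void
	arch_ftrace_set_direct_caller(struct ftrace_regs *fregs,
				      unsigned long addr)
	{
		/*
		 * Record where the ftrace trampoline should return to
		 * instead of the instrumented function; no pt_regs needed.
		 */
		fregs->direct_tramp = addr;
	}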
-
Florent Revest authored
All direct calls are now registered using the register_ftrace_direct API so each ops can jump to only one direct-called trampoline. By storing the direct called trampoline address directly in the ops we can save one hashmap lookup in the direct call ops and implement arm64 direct calls on top of call ops. Link: https://lkml.kernel.org/r/20230321140424.345218-6-revest@chromium.org Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
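Storing the address in the ops lets the handler skip the hash lookup entirely (a sketch following the commit description):

	static void call_direct_funcs(unsigned long ip, unsigned long pip,
				      struct ftrace_ops *ops,
				      struct ftrace_regs *fregs)
	{
		/* Was: addr = ftrace_find_rec_direct(ip), a hash lookup. */
		unsigned long addr = ops->direct_call;

		if (!addr)
			return;

		arch_ftrace_set_direct_caller(fregs, addr);
	}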
-
Florent Revest authored
Now that the original _ftrace_direct APIs are gone, the "_multi" suffixes only add confusion. Link: https://lkml.kernel.org/r/20230321140424.345218-5-revest@chromium.org Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Tested-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
Florent Revest authored
This API relies on a single global ops, used for all direct calls registered with it. However, to implement arm64 direct calls, we need each ops to point to a single direct call trampoline. Link: https://lkml.kernel.org/r/20230321140424.345218-4-revest@chromium.org Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Tested-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
Florent Revest authored
The _multi API requires that users keep their own ops, but in return it can enforce that an ops is only associated with one direct call. Link: https://lkml.kernel.org/r/20230321140424.345218-3-revest@chromium.org Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Tested-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
Florent Revest authored
A common pattern when using the ftrace_direct_multi API is to unregister the ops and also immediately free its filter. We've noticed it's very easy for users to miss calling ftrace_free_filter(). This adds a "free_filters" argument to unregister_ftrace_direct_multi(), both to remind users that they should free filters and to make their life easier. Link: https://lkml.kernel.org/r/20230321140424.345218-2-revest@chromium.org Suggested-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Florent Revest <revest@chromium.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
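Callers then pass the new flag at unregistration time (illustrative usage; my_ops and my_tramp are placeholders):

	/* Tear down the direct call and free the ops filter in one step. */
	unregister_ftrace_direct_multi(&my_ops, (unsigned long)my_tramp,
				       true /* free_filters */);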
-
- 19 Mar, 2023 14 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Linus Torvalds authored
Pull tracing fixes from Steven Rostedt:

 - Fix setting affinity of hwlat threads in containers.
   Using sched_setaffinity() has unwanted side effects when being called within a container. Use set_cpus_allowed_ptr() instead.

 - Fix per-cpu thread management of the hwlat tracer:
   - Do not start per_cpu threads if one is already running for the CPU.
   - When starting per_cpu threads, do not clear the kthread variable as it may already be set to running per cpu threads.

 - Fix return value for test_gen_kprobe_cmd().
   On error the return value was overwritten by being set to the result of the call from kprobe_event_delete(), which would likely succeed, and thus have the function return success.

 - Fix splice() reads from the trace file that were broken by commit 36e2c742 ("fs: don't allow splice read/write without explicit ops").

 - Remove an obsolete and confusing comment in ring_buffer.c.
   The original design of the ring buffer used struct page flags for tricks to optimize, which were shortly removed due to them being tricks. But a comment for those tricks remained.

 - Set local functions and variables to static.

* tag 'trace-v6.3-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing/hwlat: Replace sched_setaffinity with set_cpus_allowed_ptr
  ring-buffer: remove obsolete comment for free_buffer_page()
  tracing: Make splice_read available again
  ftrace: Set direct_ops storage-class-specifier to static
  trace/hwlat: Do not start per-cpu thread if it is already running
  trace/hwlat: Do not wipe the contents of per-cpu thread data
  tracing/osnoise: set several trace_osnoise.c variables storage-class-specifier to static
  tracing: Fix wrong return in kprobe_event_gen_test.c
-
Costa Shulyupin authored
There is a problem with the behavior of hwlat in a container, resulting in incorrect output. A warning message is generated: "cpumask changed while in round-robin mode, switching to mode none", and the tracing_cpumask is ignored. This issue arises because the kernel thread, hwlatd, is not a part of the container, and the function sched_setaffinity is unable to locate it using its PID. Additionally, the task_struct of hwlatd is already known. Ultimately, the function set_cpus_allowed_ptr achieves the same outcome as sched_setaffinity, but employs task_struct instead of PID.

Test case:

  # cd /sys/kernel/tracing
  # echo 0 > tracing_on
  # echo round-robin > hwlat_detector/mode
  # echo hwlat > current_tracer
  # unshare --fork --pid bash -c 'echo 1 > tracing_on'
  # dmesg -c

Actual behavior:

  [573502.809060] hwlat_detector: cpumask changed while in round-robin mode, switching to mode none

Link: https://lore.kernel.org/linux-trace-kernel/20230316144535.1004952-1-costa.shul@redhat.com
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 0330f7aa ("tracing: Have hwlat trace migrate across tracing_cpumask CPUs")
Signed-off-by:
Costa Shulyupin <costa.shul@redhat.com> Acked-by:
Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
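The fix boils down to swapping the PID-based call for the task_struct-based one (a sketch; variable names loosely follow the hwlat code):

	/* Before: fails inside a PID namespace where hwlatd's PID
	 * is not visible. */
	sched_setaffinity(kthread->pid, current_mask);

	/* After: operates on the task_struct directly, so it works
	 * regardless of namespaces. */
	set_cpus_allowed_ptr(kthread, current_mask);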
-
Vlastimil Babka authored
The comment refers to mm/slob.c which is being removed. It comes from commit ed56829c ("ring_buffer: reset buffer page when freeing") and according to Steven the borrowed code was a page mapcount and mapping reset, which was later removed by commit e4c2ce82 ("ring_buffer: allocate buffer page pointer"). Thus the comment is not accurate anyway, remove it. Link: https://lore.kernel.org/linux-trace-kernel/20230315142446.27040-1-vbabka@suse.cz Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Ingo Molnar <mingo@elte.hu> Reported-by:
Mike Rapoport <mike.rapoport@gmail.com> Suggested-by:
Steven Rostedt (Google) <rostedt@goodmis.org> Fixes: e4c2ce82 ("ring_buffer: allocate buffer page pointer") Signed-off-by:
Vlastimil Babka <vbabka@suse.cz> Reviewed-by:
Mukesh Ojha <quic_mojha@quicinc.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
Sung-hun Kim authored
Since commit 36e2c742 ("fs: don't allow splice read/write without explicit ops") was applied to the kernel, splice() and sendfile() calls on the trace file (/sys/kernel/debug/tracing/trace) return EINVAL. This patch restores these system calls by initializing splice_read in the file_operations of the trace file. It only enables this functionality for the read case.

Link: https://lore.kernel.org/linux-trace-kernel/20230314013707.28814-1-sfoon.kim@samsung.com
Cc: stable@vger.kernel.org
Fixes: 36e2c742 ("fs: don't allow splice read/write without explicit ops")
Signed-off-by:
Sung-hun Kim <sfoon.kim@samsung.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
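The restoration is a one-line file_operations hook (illustrative; the other handlers are elided, and generic_file_splice_read was the stock helper for such reads at the time):

	static const struct file_operations tracing_fops = {
		.open		= tracing_open,
		.read		= seq_read,
		/* Restore splice()/sendfile() support for reads. */
		.splice_read	= generic_file_splice_read,
		/* remaining handlers unchanged */
	};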
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Linus Torvalds authored
Pull tty/serial driver fixes from Greg KH:
"Here are some small tty and serial driver fixes for 6.3-rc3 to resolve some reported issues. They include:
 - 8250 driver Kconfig issue pointed out by you that showed up in -rc1
 - qcom-geni serial driver fixes
 - various 8250 driver fixes for reported problems
 - fsl_lpuart driver fixes
 - serdev fix for regression in -rc1
 - vt.c bugfix
All have been in linux-next for over a week with no reported problems"

* tag 'tty-6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
  tty: vt: protect KD_FONT_OP_GET_TALL from unbound access
  serial: qcom-geni: drop bogus uart_write_wakeup()
  serial: qcom-geni: fix mapping of empty DMA buffer
  serial: qcom-geni: fix DMA mapping leak on shutdown
  serial: qcom-geni: fix console shutdown hang
  serdev: Set fwnode for serdev devices
  tty: serial: fsl_lpuart: fix race on RX DMA shutdown
  serial: 8250_pci1xxxx: Disable SERIAL_8250_PCI1XXXX config by default
  serial: 8250_fsl: fix handle_irq locking
  serial: 8250_em: Fix UART port type
  serial: 8250: ASPEED_VUART: select REGMAP instead of depending on it
  tty: serial: fsl_lpuart: skip waiting for transmission complete when UARTCTRL_SBK is asserted
  Revert "tty: serial: fsl_lpuart: adjust SERIAL_FSL_LPUART_CONSOLE config dependency"
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Linus Torvalds authored
Pull char/misc driver fixes from Greg KH:
"Here are a few small char/misc/other driver subsystem patches to resolve reported problems for 6.3-rc3. Included in here are:
 - Interconnect driver fixes for reported problems
 - Memory driver fixes for reported problems
 - nvmem core fix
 - firmware driver fix for reported problem
All of these have been in linux-next for a while with no reported issues"

* tag 'char-misc-6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (23 commits)
  memory: tegra30-emc: fix interconnect registration race
  memory: tegra20-emc: fix interconnect registration race
  memory: tegra124-emc: fix interconnect registration race
  memory: tegra: fix interconnect registration race
  interconnect: exynos: drop redundant link destroy
  interconnect: exynos: fix registration race
  interconnect: exynos: fix node leak in probe PM QoS error path
  interconnect: qcom: msm8974: fix registration race
  interconnect: qcom: rpmh: fix registration race
  interconnect: qcom: rpmh: fix probe child-node error handling
  interconnect: qcom: rpm: fix registration race
  nvmem: core: return -ENOENT if nvmem cell is not found
  firmware: xilinx: don't make a sleepable memory allocation from an atomic context
  interconnect: qcom: rpm: fix probe child-node error handling
  interconnect: qcom: osm-l3: fix registration race
  interconnect: imx: fix registration race
  interconnect: fix provider registration API
  interconnect: fix icc_provider_del() error handling
  interconnect: fix mem leak when freeing nodes
  interconnect: qcom: qcm2290: Fix MASTER_SNOC_BIMC_NRT
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull RAS fix from Borislav Petkov:
 - Flush out logged errors immediately after MCA banks configuration changes over sysfs have been done, instead of waiting until something else triggers the workqueue later - another error or the polling interval cycle is reached

* tag 'ras_urgent_for_v6.3_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mce: Make sure logged MCEs are processed after sysfs update
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf fixes from Borislav Petkov:
 - Check whether sibling events have been deactivated before adding them to groups
 - Update the proper event time tracking variable depending on the event type
 - Fix a memory overwrite issue due to using the wrong function argument when outputting perf events

* tag 'perf_urgent_for_v6.3_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf: Fix check before add_event_to_groups() in perf_group_detach()
  perf: fix perf_event_context->time
  perf/core: Fix perf_output_begin parameter is incorrectly invoked in perf_event_bpf_output
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Borislav Petkov:
"There's a little bit more 'movement' in there for my taste but it needs to happen and should make the code better after it.
 - Check cmdline_find_option()'s return value before further processing
 - Clear temporary storage in the resctrl code to prevent access to an unexistent MSR
 - Add a simple throttling mechanism to protect the hypervisor from potentially malicious SEV guests issuing requests in rapid succession. In order to not jeopardize the sanity of everyone involved in maintaining this code, the request issuing side has received a cleanup, split in more or less trivial, small and digestible pieces. Otherwise, the code was threatening to become an unmaintainable mess. Therefore, that cleanup is marked indirectly also for stable so that there's no differences between the upstream code and the stable variant when it comes down to backporting more there"

* tag 'x86_urgent_for_v6.3_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm: Fix use of uninitialized buffer in sme_enable()
  x86/resctrl: Clear staged_config[] before and after it is used
  virt/coco/sev-guest: Add throttling awareness
  virt/coco/sev-guest: Convert the sw_exit_info_2 checking to a switch-case
  virt/coco/sev-guest: Do some code style cleanups
  virt/coco/sev-guest: Carve out the request issuing logic into a helper
  virt/coco/sev-guest: Remove the disable_vmpck label in handle_guest_request()
  virt/coco/sev-guest: Simplify extended guest request handling
  virt/coco/sev-guest: Check SEV_SNP attribute at probe time
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Linus Torvalds authored
Pull ext4 fix from Ted Ts'o:
"Fix a double unlock bug on an error path in ext4, found by smatch and syzkaller"

* tag 'ext4_for_linus_urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: fix possible double unlock when moving a directory
-
Tom Rix authored
smatch reports this warning:

  kernel/trace/ftrace.c:2594:19: warning: symbol 'direct_ops' was not declared. Should it be static?

The variable direct_ops is only used in ftrace.c, so it should be static.

Link: https://lore.kernel.org/linux-trace-kernel/20230311135113.711824-1-trix@redhat.com
Signed-off-by:
Tom Rix <trix@redhat.com> Acked-by:
Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-
Tero Kristo authored
The hwlat tracer will end up starting multiple per-cpu threads with the following script:

  #!/bin/sh
  cd /sys/kernel/debug/tracing
  echo 0 > tracing_on
  echo hwlat > current_tracer
  echo per-cpu > hwlat_detector/mode
  echo 100000 > hwlat_detector/width
  echo 200000 > hwlat_detector/window
  echo 1 > tracing_on

To fix the issue, check if the hwlatd thread for the CPU is already running before starting a new one. Along with the previous patch, this avoids running multiple instances of the same CPU thread on the system.

Link: https://lore.kernel.org/all/20230302113654.2984709-1-tero.kristo@linux.intel.com/
Link: https://lkml.kernel.org/r/20230310100451.3948583-3-tero.kristo@linux.intel.com
Cc: stable@vger.kernel.org
Fixes: f46b1652 ("trace/hwlat: Implement the per-cpu mode")
Signed-off-by:
Tero Kristo <tero.kristo@linux.intel.com> Acked-by:
Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
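Conceptually the guard looks like this (a sketch of the per-cpu start path; names approximate the hwlat code):

	static int start_cpu_kthread(unsigned int cpu)
	{
		struct task_struct *kthread;

		/* A hwlatd/N thread may already be running for this CPU. */
		if (per_cpu(hwlat_per_cpu_data, cpu).kthread)
			return 0;

		kthread = kthread_run_on_cpu(kthread_fn, NULL, cpu, "hwlatd/%u");
		if (IS_ERR(kthread))
			return -ENOMEM;

		per_cpu(hwlat_per_cpu_data, cpu).kthread = kthread;
		return 0;
	}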
-
Tero Kristo authored
Do not wipe the contents of the per-cpu kthread data when starting the tracer, as doing so loses track of already running instances and can later lead to additional per-cpu threads being started.

Link: https://lore.kernel.org/all/20230302113654.2984709-1-tero.kristo@linux.intel.com/
Link: https://lkml.kernel.org/r/20230310100451.3948583-2-tero.kristo@linux.intel.com
Cc: stable@vger.kernel.org
Fixes: f46b1652 ("trace/hwlat: Implement the per-cpu mode")
Signed-off-by:
Tero Kristo <tero.kristo@linux.intel.com> Acked-by:
Daniel Bristot de Oliveira <bristot@kernel.org> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
-