- 01 Mar, 2022 22 commits
-
-
Michael Ellerman authored
When CONFIG_GENERIC_CPU=y (true for all our defconfigs) we pass -mcpu=powerpc64 to the compiler, even when we're building a 32-bit kernel. This happens because we have an ifdef CONFIG_PPC_BOOK3S_64/else block in the Makefile that was written before 32-bit supported GENERIC_CPU. Prior to that the else block only applied to 64-bit Book3E.

The GCC man page says -mcpu=powerpc64 "[specifies] a pure ... 64-bit big endian PowerPC ... architecture machine [type], with an appropriate, generic processor model assumed for scheduling purposes." It's unclear how that interacts with -m32, which we are also passing, although obviously -m32 takes precedence in some sense, as the 32-bit kernel only contains 32-bit instructions.

This was noticed by inspection, not via any bug reports, but it does affect code generation. Comparing before/after code generation there are some changes to instruction scheduling, and in the after case (with -mcpu=powerpc64 removed) the compiler seems more keen to use r8.

Fix it by making the else case only apply to Book3E 64, which excludes 32-bit.

Fixes: 0e00a8c9 ("powerpc: Allow CPU selection also on PPC32")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220215112858.304779-1-mpe@ellerman.id.au
-
Daniel Henrique Barboza authored
Executing node_set_online() when nid = NUMA_NO_NODE results in undefined behavior. node_set_online() will call node_set_state(), into __node_set(), into set_bit(), and since NUMA_NO_NODE is -1 we'll end up doing a negative shift operation inside arch/powerpc/include/asm/bitops.h. This potential UB was detected running a kernel with CONFIG_UBSAN.

The behavior was introduced by commit 10f78fd0 ("powerpc/numa: Fix a regression on memoryless node 0"), where the check for nid > 0 was removed to fix a problem that was happening with nid = 0, but the result is that now we're trying to online NUMA_NO_NODE nids as well. Checking for nid >= 0 will allow node 0 to be onlined while avoiding this UB with NUMA_NO_NODE.

Fixes: 10f78fd0 ("powerpc/numa: Fix a regression on memoryless node 0")
Reported-by: Ping Fang <pifang@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220224182312.1012527-1-danielhb413@gmail.com
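A minimal sketch of the fixed check (the helper function here is hypothetical; the real change lives in the powerpc NUMA setup code):

    #include <linux/nodemask.h>
    #include <linux/numa.h>

    /*
     * NUMA_NO_NODE is -1, so passing it to node_set_online() ends up
     * doing a negative shift inside set_bit(). Checking nid >= 0
     * filters out NUMA_NO_NODE while still allowing node 0 to be
     * onlined.
     */
    static void example_online_node(int nid)
    {
            if (nid >= 0)
                    node_set_online(nid);
    }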
-
Christophe Leroy authored
Since commit ceff77ef ("powerpc/64e/interrupt: Use new interrupt context tracking scheme") struct interrupt_state has been empty and unused. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1d862ce3eab3da6ca7ac47d4a78a18f154462511.1645806970.git.christophe.leroy@csgroup.eu
-
Hari Bathini authored
Crash recovery (fadump) is set up in userspace by some service. This service rebuilds the initrd with dump capture capability, if it is not already dump capture capable, before proceeding to register for firmware assisted dump (echo 1 > /sys/kernel/fadump/registered). But arming the kernel with crash recovery support does not have to wait for userspace configuration. So, register for fadump during setup itself.

This can at worst lead to a scenario where /proc/vmcore is ready after crash but the initrd does not know how/where to offload it, which is always better than not having a /proc/vmcore at all due to incomplete configuration in userspace at the time of crash.

Commit 0823c68b ("powerpc/fadump: re-register firmware-assisted dump if already registered") ensures this change does not break userspace.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
[mpe: Reword comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220201105305.155511-1-hbathini@linux.ibm.com
-
Kajol Jain authored
The testcase uses event code 0x35340401e0 to verify the settings for different fields in Monitor Mode Control Register A (MMCRA). The fields include thresh_start, thresh_stop, thresh_select, SDAR mode, and the sample and marked bits. Checks if these fields are translated correctly via the perf interface to MMCRA.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-21-kjain@linux.ibm.com
-
Kajol Jain authored
The testcase uses event code 0x1340000001c040 to verify the settings for different src fields in Monitor Mode Control Register 3 (MMCR3). Checks if these fields are translated correctly via the perf interface to MMCR3 on the ISA v3.1 platform.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-20-kjain@linux.ibm.com
-
Madhavan Srinivasan authored
The testcase uses the cycles event to verify the freeze counter settings in Monitor Mode Control Register 2 (MMCR2). The event modifier (exclude_kernel) setting is used for the event attribute to check the FCxS and FCxH (freeze counters in privileged and hypervisor state) settings via the perf interface.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
[mpe: Add error checking, check MSR for MSR_HV, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-19-kjain@linux.ibm.com
-
Madhavan Srinivasan authored
The testcase uses event code 0x010000046080 to verify the l2l3 bit setting in Monitor Mode Control Register 2 (MMCR2). Checks if this bit is set correctly via the perf interface on the ISA v3.1 platform.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-18-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses event code "0x26880" to verify the settings for different fields in Monitor Mode Control Register 1 (MMCR1). The field include PMCxCOMB. Checks if this field are translated correctly via perf interface to MMCR1 Add selftest for mmcr1 comb field. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> [mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220127072012.662451-16-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses event code 0x500fa to verify the FC5-6 bit setting in Monitor Mode Control Register 0 (MMCR0). Checks that the FC5-6 bit is not set in MMCR0 when using Performance Monitor Counters 5 and 6 (PMC5 and PMC6).

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-15-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses event code 0x1001e to verify two bit settings (FC5-6 and PMC1CE) in Monitor Mode Control Register 0 (MMCR0). Checks that the FC5-6 bit is set in MMCR0 when not using Performance Monitor Counters 5 and 6 (PMC5 and PMC6), and that PMC1CE is set when using PMC1. Tests if these fields are programmed correctly via the perf interface.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-14-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses event code 0x500fa ("instructions") to verify the PMCjCE bit setting in Monitor Mode Control Register 0 (MMCR0). This bit is expected to be set in MMCR0 when using Performance Monitor Counter 5 (PMC5). Checks if the perf interface sets this bit correctly.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-13-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses the cycles event to check the PMCCEXT bit setting in Monitor Mode Control Register 0 (MMCR0). Checks if the perf interface sets this control bit in MMCR0 on the ISA v3.1 platform.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-12-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses event code 0x500fa ("instructions") to check the CC56RUN bit setting in Monitor Mode Control Register 0(MMCR0). In ISA v3.1 platform, this bit is expected to be set in MMCR0 when using Performance Monitor Counter 5 and 6 (PMC5 and PMC6). Verify this is done correctly by perf interface. CC56RUN bit makes PMC5 and PMC6 count regardless of the run latch state. This bit is set in power10 since PMC5 and PMC6 is used in power10 for counting instructions and cycles. Hence added a check to skip this test in other platforms Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> [mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220127072012.662451-11-kjain@linux.ibm.com
-
Athira Rajeev authored
The testcase uses "instructions" event to verify two bits(PMAE and PMAO) in Monitor Mode Control Register 0 (MMCR0). At the time of interrupt, pmae bit ( which enables performance monitor exception ) is expected to be cleared and pmao (which indicates performance monitor alert) bit is expected to be set in MMCR0. And testcases handles these checks. Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> [mpe: Add error checking, drop GET_MMCR_FIELD, add to .gitignore] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20220127072012.662451-10-kjain@linux.ibm.com
-
Kajol Jain authored
Add macros and utility functions to fetch individual fields from the Monitor Mode Control Register 3 (MMCR3) and Monitor Mode Control Register A (MMCRA) PMU registers.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-9-kjain@linux.ibm.com
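A minimal sketch of the kind of field extractor added (the macro name and bit positions below are illustrative, not the selftest's actual definitions):

    #include <stdint.h>

    /*
     * Shift the register value down to the field's low bit, then mask
     * off the field's width. The selftest headers define one such
     * helper per MMCR3/MMCRA field.
     */
    #define EXTRACT_FIELD(reg, low_bit, width) \
            (((reg) >> (low_bit)) & ((1ULL << (width)) - 1))

    static inline uint64_t example_mmcra_sdar_mode(uint64_t mmcra)
    {
            /* Hypothetical bit position, for illustration only. */
            return EXTRACT_FIELD(mmcra, 42, 2);
    }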
-
Athira Rajeev authored
Add macros and utility functions to fetch individual fields from the Monitor Mode Control Register 0 (MMCR0) and Monitor Mode Control Register 1 (MMCR1) PMU registers.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-8-kjain@linux.ibm.com
-
Madhavan Srinivasan authored
Along with it, add macros and utility functions to fetch individual fields from the Monitor Mode Control Register 2 (MMCR2) PMU register.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-7-kjain@linux.ibm.com
-
Madhavan Srinivasan authored
Extend event_init_opts() to include initialization of sampling testcases. The patch adds an event_init_sampling() wrapper to initialize event attribute fields for sampling events. This includes initializing the sample period, sample type and event type.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-6-kjain@linux.ibm.com
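A rough sketch of what such a sampling-init wrapper might look like (the helper name and field values are assumptions, not the selftest's exact code):

    #include <linux/perf_event.h>
    #include <string.h>

    /*
     * Illustrative wrapper: initialize a perf_event_attr for sampling
     * by filling in the sample period, sample type and event type. A
     * real caller requesting PERF_SAMPLE_REGS_INTR would also set
     * sample_regs_intr.
     */
    static void example_init_sampling(struct perf_event_attr *attr,
                                      unsigned long config)
    {
            memset(attr, 0, sizeof(*attr));
            attr->size = sizeof(*attr);
            attr->type = PERF_TYPE_RAW;
            attr->config = config;
            attr->sample_period = 1000;     /* assumed period */
            attr->sample_type = PERF_SAMPLE_REGS_INTR;
            attr->disabled = 1;
    }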
-
Kajol Jain authored
Add a couple of basic utility functions to post-process the mmap buffer. These include a function to read the total number of samples present in the mmap buffer and a function to get the address of the first sample.

Add function "get_intr_regs" which will return a pointer to the interrupt registers present in the sample, in case sample type PERF_SAMPLE_REGS_INTR is set. Add function "get_reg_value" which can be used to read any interrupt register value from a given sample.

Signed-off-by: Kajol Jain <kjain@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-5-kjain@linux.ibm.com
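A usage sketch for the helpers named above (the exact prototypes are assumptions based on the description):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed prototypes for the helpers described above. */
    uint64_t *get_intr_regs(void *sample, uint32_t sample_type);
    uint64_t get_reg_value(uint64_t *intr_regs, char *register_name);

    /*
     * Illustrative flow: given a sample from the mmap buffer, fetch
     * the interrupt registers (valid when PERF_SAMPLE_REGS_INTR was
     * requested) and read one register value, e.g. MMCR0.
     */
    static int example_check_sample(void *sample, uint32_t sample_type)
    {
            uint64_t *intr_regs = get_intr_regs(sample, sample_type);

            if (!intr_regs)
                    return -1;

            printf("MMCR0: 0x%llx\n",
                   (unsigned long long)get_reg_value(intr_regs, "MMCR0"));
            return 0;
    }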
-
Madhavan Srinivasan authored
Each platform has a raw event encoding format which specifies the bit positions for different fields. The fields from the event code get translated into performance monitoring mode control register (MMCRx) settings. The patch adds macros to extract individual fields from the event code. Add functions for sanity checks, since the testcases are currently only supported on power9 and power10.

Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
[mpe: Read PVR directly rather than using /proc/cpuinfo]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-4-kjain@linux.ibm.com
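An illustrative sketch of such a PVR-based sanity check (the helper names are assumptions; the userspace mfspr of the PVR relies on the kernel emulating the privileged read):

    #include <stdbool.h>

    #define SPRN_PVR        287     /* Processor Version Register */
    #define PVR_VER(pvr)    (((pvr) >> 16) & 0xFFFF)
    #define PVR_POWER9      0x004E
    #define PVR_POWER10     0x0080

    /* Read the PVR directly instead of parsing /proc/cpuinfo. */
    static inline unsigned long example_mfpvr(void)
    {
            unsigned long pvr;

            asm volatile("mfspr %0, %1" : "=r" (pvr) : "i" (SPRN_PVR));
            return pvr;
    }

    /* Sanity check: only power9 and power10 are supported. */
    static bool example_platform_supported(void)
    {
            unsigned long ver = PVR_VER(example_mfpvr());

            return ver == PVR_POWER9 || ver == PVR_POWER10;
    }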
-
Athira Rajeev authored
Add support functions for enabling the perf sampling tests in a new folder "sampling_tests" under "selftests/powerpc/pmu". This includes support functions for allocating and processing the mmap buffer. These functions are added/defined in the "sampling_tests/misc.*" files. Also update the corresponding Makefiles in the "selftests/powerpc" and "sampling_tests" folders.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
[mpe: Drop unneeded bits from the Makefile]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-3-kjain@linux.ibm.com
-
- 28 Feb, 2022 1 commit
-
-
Athira Rajeev authored
To enable the capturing of samples as part of a perf event, add a new field "mmap_buffer" to "struct event". This field is a place-holder for sample collection.

Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220127072012.662451-2-kjain@linux.ibm.com
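A minimal sketch of the extended structure (the pre-existing members shown here are assumptions, not the selftest's exact layout):

    #include <linux/perf_event.h>

    struct event {
            struct perf_event_attr attr;
            char *name;
            int fd;
            /* New: place-holder where sampled data is collected. */
            void *mmap_buffer;
    };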
-
- 24 Feb, 2022 2 commits
-
-
Guo Zhengkui authored
Fix the following coccicheck warning:

    ./arch/powerpc/kernel/module_64.c:432:40-41: WARNING: Use ARRAY_SIZE.

ARRAY_SIZE(arr) is a macro provided by the kernel. It makes sure that arr is an array, so it's safer than sizeof(arr) / sizeof(arr[0]) and more standard.

Signed-off-by: Guo Zhengkui <guozhengkui@vivo.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220223075426.20939-1-guozhengkui@vivo.com
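The change follows the usual pattern (a representative before/after with an illustrative array name):

    #include <linux/kernel.h>       /* ARRAY_SIZE() */

    static const char *names[] = { "a", "b", "c" };

    static unsigned long count_names(void)
    {
            /* Before: sizeof(names) / sizeof(names[0]) */
            /* After: ARRAY_SIZE() also fails to compile for a pointer. */
            return ARRAY_SIZE(names);
    }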
-
Nicholas Piggin authored
Hash faults are not resolved in NMI context, instead causing the access to fail. This is done because perf interrupts can get backtraces, including walking the user stack, and taking a hash fault on those could deadlock on the HPTE lock if the perf interrupt hits while the same HPTE lock is being held by the hash fault code. The user-access for the stack walking will notice the access failed and deal with that in the perf code.

The reason to allow perf interrupts in was to better profile hash faults.

The problem with this is that any hash fault on a kernel access that happens in NMI context will crash, because kernel accesses must not fail. Hard lockups, system reset, machine checks that access vmalloc space including modules and including stack backtracing and symbol lookup in modules, per-cpu data, etc. could all run into this problem.

Fix this by disallowing perf interrupts in the hash fault code (the direct hash fault is covered by MSR[EE]=0, so the PMI disable just needs to extend to the preload case). This simplifies the tricky logic in hash faults and perf, at the cost of reduced profiling of hash faults.

perf can still latch addresses when interrupts are disabled, it just won't get the stack trace at that point, so it would still find hot spots, just sometimes with confusing stack chains.

An alternative could be to allow perf interrupts here but always do the slowpath stack walk if we are in NMI context, but that would slow down all perf interrupt stack walking on hash and it does not remove as much tricky code.

Reported-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220204035348.545435-1-npiggin@gmail.com
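A hedged sketch of the preload-side change (the function body is illustrative; powerpc's existing PMI-masking helpers are assumed to be the mechanism):

    #include <asm/hw_irq.h>

    /*
     * The direct hash fault path already runs with MSR[EE]=0, so only
     * the preload path needs PMIs (perf interrupts) masked while it
     * may take the HPTE lock.
     */
    static void example_hash_preload(unsigned long ea)
    {
            unsigned long flags;

            powerpc_local_irq_pmu_save(flags);
            /* ... hash table preload that may take the HPTE lock ... */
            powerpc_local_irq_pmu_restore(flags);
    }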
-
- 23 Feb, 2022 1 commit
-
-
Christophe Leroy authored
Following commit 12318163 ("powerpc/32: Remove remaining .stabs annotations"), the stabs code is not used anymore. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/d8b33342d7454f6ca4f368f5206896558dfa06f4.1645538722.git.christophe.leroy@csgroup.eu
-
- 16 Feb, 2022 5 commits
-
-
Vaibhav Jain authored
Presently PAPR doesn't support injecting smart errors on an NVDIMM. This makes testing the NVDIMM health reporting functionality difficult, as simulating NVDIMM health related events needs a hacked-up qemu version.

To solve this problem this patch proposes simulating a certain set of NVDIMM health related events in papr_scm, specifically the 'fatal' health state and the 'dirty' shutdown state. These errors can be injected via the user-space 'ndctl-inject-smart(1)' command. With the proposed patch and corresponding ndctl patches the following command flow is expected:

    $ sudo ndctl list -DH -d nmem0
    ...
    "health_state":"ok",
    "shutdown_state":"clean",
    ...
    # inject unsafe shutdown and fatal health error
    $ sudo ndctl inject-smart nmem0 -Uf
    ...
    "health_state":"fatal",
    "shutdown_state":"dirty",
    ...
    # uninject all errors
    $ sudo ndctl inject-smart nmem0 -N
    ...
    "health_state":"ok",
    "shutdown_state":"clean",
    ...

The patch adds a new member 'health_bitmap_inject_mask' inside struct papr_scm_priv which is then bitwise ANDed with the health bitmap fetched from the hypervisor. The value of 'health_bitmap_inject_mask' is accessible from sysfs at nmemX/papr/health_bitmap_inject.

A new PDSM named 'SMART_INJECT' is proposed that accepts the newly introduced 'struct nd_papr_pdsm_smart_inject' as payload that's exchanged between libndctl and papr_scm to indicate the requested smart-error states. When processing the 'SMART_INJECT' PDSM, papr_pdsm_smart_inject() constructs a pair of 'inject_mask' and 'clear_mask' bitmaps from the payload and bit-blts them into 'health_bitmap_inject_mask'. This ensures that, after being fetched from the hypervisor, the health bitmap reflects the requested smart-error states.

Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220124202204.1488346-1-vaibhav@linux.ibm.com
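A hedged sketch of the bitmap bookkeeping described above (struct and helper names beyond those quoted from the commit are illustrative):

    #include <linux/types.h>

    /* Only the member relevant to injection is shown. */
    struct papr_scm_priv_sketch {
            u64 health_bitmap_inject_mask;
    };

    /*
     * Bit-blt the PDSM payload's masks into the inject mask:
     * clear_mask drops previously injected bits, inject_mask sets the
     * requested smart-error bits.
     */
    static void example_smart_inject(struct papr_scm_priv_sketch *p,
                                     u64 inject_mask, u64 clear_mask)
    {
            p->health_bitmap_inject_mask =
                    (p->health_bitmap_inject_mask & ~clear_mask) |
                    inject_mask;
    }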
-
Christophe Leroy authored
Add some line breaks to better match the file's style, add some space after commas and fix a couple of misplaced blanks.

Suggested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/973506292d0c7b05c06530c8e11803ce38e5eda2.1644949750.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
When FL_SAVE_REGS is not set we get here via ftrace_caller(), which doesn't save all registers. ftrace_caller() explicitly clears regs.msr, so we can rely on it to know where we come from. We don't expect the MSR register to be 0 at all when ftrace is involved.

Fixes: 40b035ef ("powerpc/ftrace: Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS")
Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/2f9a7e898c93cc7438ef5ccd47cb9c3a9c5b53ef.1644949750.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
function_graph_enter() does not provide any recursion protection. Add protection in prepare_ftrace_return() in case function_graph_enter() calls something that gets function-graph traced.

Fixes: 83021378 ("powerpc/ftrace: directly call of function graph tracer by ftrace caller")
Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/74edf2ff0a60e66b0d9225a137100a86a0557032.1644949750.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Also save r1 in ftrace_caller(). r1 is needed during unwinding when the function_graph tracer is active.

Fixes: 83021378 ("powerpc/ftrace: directly call of function graph tracer by ftrace caller")
Reported-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ff535e86d3a69376a6d89168511d4e403835f18b.1644949750.git.christophe.leroy@csgroup.eu
-
- 15 Feb, 2022 1 commit
-
-
Paul Menzel authored
Currently, `git status` lists the file as untracked by git, so tell git to ignore it.

Fixes: aa3bc365 ("powerpc/ps3: Add check for otheros image size")
Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de>
Acked-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220214065543.198992-1-pmenzel@molgen.mpg.de
-
- 14 Feb, 2022 1 commit
-
-
Christophe Leroy authored
Warnings in assembly must use EMIT_WARN_ENTRY in order to generate the necessary entry in the exception table. Check in EMIT_BUG_ENTRY that the flags don't include BUGFLAG_WARNING. This change avoids problems like the one fixed by commit fd1eaaaa ("powerpc/64s: Use EMIT_WARN_ENTRY for SRR debug warnings").

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/ddcb422102a37eb45f57694c7ef0ec6187964dff.1644742951.git.christophe.leroy@csgroup.eu
-
- 12 Feb, 2022 7 commits
-
-
Michael Ellerman authored
Our skiroot_defconfig doesn't enable FTRACE, and so doesn't get STACKTRACE enabled either. That leads to a build failure since commit 1614b2b1 ("arch: Make ARCH_STACKWALK independent of STACKTRACE") made stacktrace.c build even when STACKTRACE=n:

    arch/powerpc/kernel/stacktrace.c: In function ‘handle_backtrace_ipi’:
    arch/powerpc/kernel/stacktrace.c:171:2: error: implicit declaration of function ‘nmi_cpu_backtrace’
      171 |  nmi_cpu_backtrace(regs);
          |  ^~~~~~~~~~~~~~~~~
    arch/powerpc/kernel/stacktrace.c: In function ‘arch_trigger_cpumask_backtrace’:
    arch/powerpc/kernel/stacktrace.c:226:2: error: implicit declaration of function ‘nmi_trigger_cpumask_backtrace’
      226 |  nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace_ipi);
          |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This happens because our headers haven't defined arch_trigger_cpumask_backtrace, which causes lib/nmi_backtrace.c not to build nmi_cpu_backtrace().

The code in question doesn't actually depend on STACKTRACE=y; that was just added because arch_trigger_cpumask_backtrace() lived in stacktrace.c for convenience. So drop the dependency on CONFIG_STACKTRACE; that causes lib/nmi_backtrace.c to build nmi_cpu_backtrace() etc. and fixes the build.

Fixes: 1614b2b1 ("arch: Make ARCH_STACKWALK independent of STACKTRACE")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220212111349.2806972-1-mpe@ellerman.id.au
-
Aneesh Kumar K.V authored
Commit d9c23400 ("Do not depend on MAX_ORDER when grouping pages by mobility") introduced pageblock_order, which is used to group pages better. The kernel now groups pages based on the value of HPAGE_SHIFT. Hence HPAGE_SHIFT should be set before we call set_pageblock_order.

set_pageblock_order happens early in the boot and the default hugetlb page size should be initialized before that to compute the right pageblock_order value. Currently, the default hugetlb page size is set via arch_initcalls, which happen late in the boot as shown via the below callstack:

    [c000000007383b10] [c000000001289328] hugetlbpage_init+0x2b8/0x2f8
    [c000000007383bc0] [c0000000012749e4] do_one_initcall+0x14c/0x320
    [c000000007383c90] [c00000000127505c] kernel_init_freeable+0x410/0x4e8
    [c000000007383da0] [c000000000012664] kernel_init+0x30/0x15c
    [c000000007383e10] [c00000000000cf14] ret_from_kernel_thread+0x5c/0x64

while the pageblock_order initialization is done early during the boot:

    [c0000000018bfc80] [c0000000012ae120] set_pageblock_order+0x50/0x64
    [c0000000018bfca0] [c0000000012b3d94] sparse_init+0x188/0x268
    [c0000000018bfd60] [c000000001288bfc] initmem_init+0x28c/0x328
    [c0000000018bfe50] [c00000000127b370] setup_arch+0x410/0x480
    [c0000000018bfed0] [c00000000127401c] start_kernel+0xb8/0x934
    [c0000000018bff90] [c00000000000d984] start_here_common+0x1c/0x98

Delaying the default hugetlb page size initialization implies the kernel will initialize pageblock_order to (MAX_ORDER - 1), which is not an optimal value for mobility grouping. IIUC we always had this issue. But it was not a problem for hash translation mode because (MAX_ORDER - 1) is the same as HUGETLB_PAGE_ORDER (8) in the case of hash (16MB). With radix, HUGETLB_PAGE_ORDER will be 5 (2M size) and hence pageblock_order should be 5 instead of 8.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220211065215.101767-1-aneesh.kumar@linux.ibm.com
-
Ritesh Harjani authored
While debugging an issue, we wanted to check whether the arch specific kernel memmove implementation is correct. This selftest could help test that.

Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Suggested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/57242c1fe7aba6b7f0fcd0490303bfd5f222ee00.1631512686.git.riteshh@linux.ibm.com
-
Nathan Lynch authored
pseries_devicetree_update() has only one call site, in the same file in which it is defined. Make it static.

Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220207221247.354454-1-nathanl@linux.ibm.com
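The change is the standard linkage fix (the prototype below is an assumption about the function's actual signature):

    #include <linux/types.h>

    /* Before: external linkage, declared in a header for other files. */
    int pseries_devicetree_update(s32 scope);

    /* After: internal linkage, since the only caller is in this file. */
    static int pseries_devicetree_update(s32 scope);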
-
Christophe Leroy authored
Now that gettimeofday.S is unique, move the cvdso_call macro into that file, which is the only user.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/72720359d4c58e3a3b96dd74952741225faac3de.1642782130.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The cvdso_call_time macro is very similar to the cvdso_call macro. Add a call_time argument to cvdso_call which is 0 by default and set to 1 when using cvdso_call to call __c_kernel_time(). Return the returned value as is, with CR[SO] cleared, when it is used for time().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/837a260ad86fc1ce297a562c2117fd69be5f7b5c.1642782130.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Merge vdso64 into vdso32 and rename it vdso.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/4dbe05cc130f6a0858d09ac72e436c373cb08b70.1642782130.git.christophe.leroy@csgroup.eu
-