- 03 Apr, 2021 27 commits
-
-
Christophe Leroy authored
Instead of using the BPF register number as input to bpf_set_seen_register() and bpf_is_seen_register(), use the CPU register number directly. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0cd2506f598e7095ea43e62dca1f472de5474a0d.1616430991.git.christophe.leroy@csgroup.eu
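A minimal sketch of the bookkeeping this changes (the struct and function names below are hypothetical, not the actual JIT code): the 'seen' bitmap is indexed by the CPU register number the generated code really uses, rather than by the BPF register number that maps onto it.

    #include <linux/types.h>

    /* Hypothetical stand-in for the JIT's codegen context. */
    struct codegen_ctx_sketch {
            unsigned int seen;      /* one bit per CPU GPR, assumes fewer than 32 */
    };

    static void seen_register(struct codegen_ctx_sketch *ctx, int cpu_reg)
    {
            ctx->seen |= 1u << cpu_reg;
    }

    static bool is_seen_register(struct codegen_ctx_sketch *ctx, int cpu_reg)
    {
            return ctx->seen & (1u << cpu_reg);
    }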
-
Christophe Leroy authored
Currently, PPC32 has Classical BPF support. The test_bpf module exhibits some failures: test_bpf: #298 LD_IND byte frag jited:1 ret 202 != 66 FAIL (1 times) test_bpf: #299 LD_IND halfword frag jited:1 ret 51958 != 17220 FAIL (1 times) test_bpf: #301 LD_IND halfword mixed head/frag jited:1 ret 51958 != 1305 FAIL (1 times) test_bpf: #303 LD_ABS byte frag jited:1 ret 202 != 66 FAIL (1 times) test_bpf: #304 LD_ABS halfword frag jited:1 ret 51958 != 17220 FAIL (1 times) test_bpf: #306 LD_ABS halfword mixed head/frag jited:1 ret 51958 != 1305 FAIL (1 times) test_bpf: Summary: 371 PASSED, 7 FAILED, [119/366 JIT'ed] Fixing these is not worth the effort. Instead, remove support for Classical BPF and prepare for adding Extended BPF support. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fbc3e4fcc9c8f6131d6c705212530b2aa50149ee.1616430991.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Same spirit as commit debf122c ("powerpc/signal32: Simplify logging in handle_rt_signal32()"), remove this intermediate 'addr' local var. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/638fa99530beb29f82f94370057d110e91272acc.1616151715.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Add unsafe_get_user_sigset() and transform PPC32 get_sigset_t() into an unsafe version unsafe_get_sigset_t(). Then convert do_setcontext() and do_setcontext_tm() to use user_read_access_begin/end. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9273ba664db769b8d9c7540ae91395e346e4945e.1616151715.git.christophe.leroy@csgroup.eu
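The general shape of such a conversion, as a hedged sketch (the helper below is illustrative, not the actual do_setcontext() code): the read is wrapped in a single user_read_access_begin()/user_read_access_end() block and the loads use unsafe_*() accessors that branch to a label on fault.

    #include <linux/errno.h>
    #include <linux/signal.h>
    #include <linux/uaccess.h>

    /* Illustrative only: read the first sigset word inside a block user access. */
    static int read_sigset_word_sketch(unsigned long *dst, const sigset_t __user *usrc)
    {
            if (!user_read_access_begin(usrc, sizeof(*usrc)))
                    return -EFAULT;
            unsafe_get_user(*dst, &usrc->sig[0], failed);
            user_read_access_end();
            return 0;

    failed:
            user_read_access_end();
            return -EFAULT;
    }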
-
Christophe Leroy authored
Convert restore_user_regs() and restore_tm_user_regs() to use user_read_access_begin/end blocks. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/181adf15a6f644efcd1aeafb355f3578ff1b6bc5.1616151715.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In restore_tm_user_regs(), regroup the reads from 'sr' and the ones from 'tm_sr' together in order to allow two block user accesses in following patch. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7c518b9a4c8e5ae9a3bfb647bc8b20bf820233af.1616151715.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In preparation for using user_access_begin/end in restore_user_regs(), move the access_ok() inside the function. It makes no difference, as the behaviour on a failed access_ok() is the same as on a failed restore_user_regs(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c106eb2f37c3040f1fd38b40e50c670feb7cb835.1616151715.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In the same spirit as commit f1cf4f93 ("powerpc/signal32: Remove ifdefery in middle of if/else"): MSR_TM_ACTIVE() is always defined and always returns 0 when CONFIG_PPC_TRANSACTIONAL_MEM is not selected, so the awful ifdefery in the middle of an if/else can be removed. Make 'msr_hi' a 'long long' to avoid a build failure on PPC32 due to the 32-bit left shift. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a4b48b2f0be1ef13fc8e57452b7f8350da28d521.1616151715.git.christophe.leroy@csgroup.eu
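A sketch of why the ifdef can go and why 'msr_hi' has to be widened (the restore_*_sketch() helpers are hypothetical): MSR_TM_ACTIVE() evaluates to 0 when CONFIG_PPC_TRANSACTIONAL_MEM is off, so the compiler simply drops the dead branch, and the 'long long' keeps the << 32 well defined on 32-bit.

    #include <asm/reg.h>    /* MSR_TM_ACTIVE() */

    /* Hypothetical stand-ins for the real restore paths. */
    static long restore_tm_sketch(void)    { return 0; }
    static long restore_plain_sketch(void) { return 0; }

    static long restore_sketch(unsigned long long msr_hi)
    {
            /* No #ifdef needed: the branch folds away when TM is not configured. */
            if (MSR_TM_ACTIVE(msr_hi << 32))
                    return restore_tm_sketch();
            return restore_plain_sketch();
    }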
-
Christophe Leroy authored
The convention is to prefix functions with __unsafe_ instead of suffixing them with _unsafe. Rename save_user_regs_unsafe() and save_general_regs_unsafe() accordingly, that is to __unsafe_save_user_regs() and __unsafe_save_general_regs() respectively. Suggested-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8cef43607e5b35a7fd0829dec812d88beb570df2.1616151715.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Add unsafe_copy_ckfpr_from_user() and unsafe_copy_ckvsx_from_user(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1040687aa27553d19f749f7fb48f0c07af98ee2d.1616151715.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Similarly to commit 5cf773fc8f37 ("powerpc/uaccess: Also perform 64 bits copies in unsafe_copy_to_user() on ppc32"), ppc32 has an efficient 64-bit unsafe_get_user(), so use it as well in order to unroll loops more. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/308e65d9237a14e8c0e3b22919fcf0b5e5592608.1616151715.git.christophe.leroy@csgroup.eu
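A sketch of the unrolling idea, assuming aligned pointers and a length that is a multiple of 8; this is not the real unsafe_copy_from_user() implementation, only the pattern it relies on.

    #include <linux/types.h>
    #include <linux/uaccess.h>

    /* Copy 'len' bytes from user space in 64-bit chunks, inside an already
     * open user access block; jumps to 'label' on fault. */
    #define copy_from_user_u64_sketch(dst, src, len, label)            \
    do {                                                               \
            u64 *_d = (u64 *)(dst);                                    \
            const u64 __user *_s = (const u64 __user *)(src);          \
            size_t _i;                                                 \
            for (_i = 0; _i < (len) / 8; _i++)                         \
                    unsafe_get_user(_d[_i], &_s[_i], label);           \
    } while (0)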
-
Christophe Leroy authored
In the same way as commit 14026b94 ("signal: Add unsafe_put_compat_sigset()"), this time add the unsafe_get_compat_sigset() macro, which is the 'unsafe' version of get_compat_sigset(). For big endian, use unsafe_get_user() directly to avoid an intermediate copy through the stack. For little endian, use a straight unsafe_copy_from_user(). This commit adds the generic fallback for unsafe_copy_from_user(). Architectures wanting to use unsafe_get_compat_sigset() have to make sure they have their own unsafe_copy_from_user(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b05bf434ee13c76bc9df5f02653a10db5e7b54e5.1616151715.git.christophe.leroy@csgroup.eu
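Roughly what the new macro has to do, sketched here only for the simple case of a 64-bit big endian kernel with a single-word native sigset (the real macro in linux/compat.h handles the general word counts): on big endian the two 32-bit compat words are read with unsafe_get_user() and recombined, on little endian a straight unsafe_copy_from_user() is enough.

    #include <linux/compat.h>
    #include <linux/uaccess.h>

    #ifdef __BIG_ENDIAN
    #define unsafe_get_compat_sigset_sketch(set, compat, label)        \
    do {                                                               \
            compat_sigset_word _lo, _hi;                               \
            unsafe_get_user(_lo, &(compat)->sig[0], label);            \
            unsafe_get_user(_hi, &(compat)->sig[1], label);            \
            (set)->sig[0] = ((unsigned long)_hi << 32) | _lo;          \
    } while (0)
    #else
    #define unsafe_get_compat_sigset_sketch(set, compat, label)        \
            unsafe_copy_from_user(set, compat, sizeof(compat_sigset_t), label)
    #endif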
-
Christophe Leroy authored
clang 11 and future GCC versions support asm goto with outputs. Use it to implement get_user in order to get better generated code. Note that clang requires x to be set in the default branch of __get_user_size_goto(), otherwise it complains about x not being initialised. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/403745b5aaa1b315bb4e8e46c1ba949e77eecec0.1615398265.git.christophe.leroy@csgroup.eu
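A simplified illustration of what asm goto with outputs buys here; this is a sketch, not the actual __get_user_size_goto() (the register constraints are indicative, and EX_TABLE is the powerpc inline-asm exception-table macro). The fault fixup becomes a plain C label, so no error variable has to be threaded through the macro.

    #include <asm/extable.h>

    /* Needs a compiler with asm-goto-with-outputs support (clang >= 11 or a
     * recent GCC). Loads a 32-bit word from user space, branching to
     * err_label on fault. */
    #define get_user_u32_sketch(x, ptr, err_label)                     \
            asm goto(                                                  \
                    "1:     lwz %0, 0(%1)\n"                           \
                    EX_TABLE(1b, %l2)                                  \
                    : "=r" (x)                                         \
                    : "b" (ptr)                                        \
                    :                                                  \
                    : err_label)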
-
Christophe Leroy authored
We have two places doing a goto based on the result of __get_user_size_allowed(). Refactor them into __get_user_size_goto(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/def8a39289e02653cfb1583b3b19837de9efed3a.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Make get_user() do the access_ok() check then call __get_user(). Make put_user() do the access_ok() check then call __put_user(). Then embed __get_user_size() and __put_user_size() in __get_user() and __put_user(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/eebc554f6a81f570c46ea3551000ff5b886e4faa.1615398265.git.christophe.leroy@csgroup.eu
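In sketch form, the resulting structure (conceptual only; the real macros also deal with access sizes, result types and the __user annotations):

    #include <linux/errno.h>
    #include <linux/uaccess.h>

    /* get_user() = access_ok() check + __get_user() */
    #define get_user_sketch(x, ptr)                                    \
    ({                                                                 \
            long _gu_ret = -EFAULT;                                    \
            if (access_ok(ptr, sizeof(*(ptr))))                        \
                    _gu_ret = __get_user(x, ptr);                      \
            _gu_ret;                                                   \
    })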
-
Christophe Leroy authored
__get_user_check() becomes get_user() __put_user_check() becomes put_user() __get_user_nocheck() becomes __get_user() __put_user_nocheck() becomes __put_user() Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/41d7e45f4733f0e61e63824e4865b4e049db74d6.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
One part of __get_user_nocheck() is used for __get_user(), the other part for unsafe_get_user(). Move the part dedicated to unsafe_get_user() into unsafe_get_user() itself. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/618fe2e0626b308a5a063d5baac827b968e85c32.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
__get_user_bad() and __put_user_bad() are functions that are declared but not defined, in order to make the link fail in case they are called. Nowadays, we have BUILD_BUG() and BUILD_BUG_ON() for that, and they have the advantage of breaking the build earlier, at compile time instead of link time. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d7d839e994f49fae4ff7b70fac72bd951272436b.1615398265.git.christophe.leroy@csgroup.eu
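The pattern this moves to, as a sketch (not the exact powerpc macro): an unsupported access size now trips BUILD_BUG() at compile time instead of leaving a call to an undefined __get_user_bad() to fail at link time.

    #include <linux/build_bug.h>

    #define __get_user_size_sketch(x, ptr, size)                       \
    do {                                                               \
            switch (size) {                                            \
            case 1: case 2: case 4: case 8:                            \
                    /* the real code emits the asm load for this width */ \
                    break;                                             \
            default:                                                   \
                    BUILD_BUG();    /* bad size: caught at compile time */ \
            }                                                          \
    } while (0)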
-
Christophe Leroy authored
Commit d02f6b7d ("powerpc/uaccess: Evaluate macro arguments once, before user access is allowed") changed the __chk_user_ptr() argument from the passed ptr pointer to the locally declared __gu_addr. But __gu_addr is locally defined as __user, so the check is pointless. During a kernel build __chk_user_ptr() expands to nothing and is only evaluated during sparse checks, so it would have been harmless to leave the original pointer check there. Nevertheless, this check is indeed redundant with the assignment above, which casts the ptr pointer to the local __user __gu_addr. In case of a mismatch, sparse will detect it there, so __chk_user_ptr() is not needed anywhere else than in access_ok(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/69f17d75046733b891ab2e668dbf464787cdf598.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
__unsafe_put_user_goto() is just an intermediate layer to __put_user_size_goto() without added value other than doing the __user pointer type checking. Do the __user pointer type checking in __put_user_size_goto() and remove __unsafe_put_user_goto(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b6552149209aebd887a6977272b06a41256bdb9f.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Commit 6bfd93c3 ("powerpc: Fix incorrect might_sleep in __get_user/__put_user on kernel addresses") added a check to not call might_sleep() on kernel addresses. This was to enable the use of __get_user() in the alignment exception handler for any address. Then commit 95156f00 ("lockdep, mm: fix might_fault() annotation") added a check of the address space in might_fault(), based on set_fs() logic. But this didn't solve the powerpc alignment exception case, as it didn't call set_fs(KERNEL_DS). Nowadays, set_fs() is gone, the previous patch fixed the alignment exception handler, and __get_user/__put_user are not supposed to be used anymore to read kernel memory. Therefore the is_kernel_addr() check has become useless and can be removed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e0a980a4dc7a2551183dd5cb30f46eafdbee390c.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In the old days, when we didn't have kernel userspace access protection and had set_fs(), it was wise to use __get_user() and friends to read kernel memory. Nowadays, get_user() grants userspace access and is exclusively for userspace access. In the alignment exception handler, use probe_kernel_read_inst() instead of __get_user_instr() for reading instructions in the kernel. This will allow removing the is_kernel_addr() check in __get/put_user() in a following patch. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d9ecbce00178484e66ca7adec2ff210058037704.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Those helpers use the get_user helpers but they don't participate in their implementation, so they do not belong in asm/uaccess.h. Move them to asm/inst.h. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2c6e83581b4fa434aa7cf2fa7714c41e98f57007.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Powerpc is the only architecture having _inatomic variants of __get_user() and __put_user() accessors. They were introduced by commit e68c825b ("[POWERPC] Add inatomic versions of __get_user and __put_user"). Those variants expand to the _nosleep macros instead of expanding to the _nocheck macros. The only difference between the _nocheck and the _nosleep macros is the call to might_fault(). Since commit 662bbcb2 ("mm, sched: Allow uaccess in atomic with pagefault_disable()"), __get/put_user() can be used in atomic parts of the code, therefore __get/put_user_inatomic() have become useless. Remove __get_user_inatomic() and __put_user_inatomic(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1e5c895669e8d54a7810b62dc61eb111f33c2c37.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
This patch converts emulate_spe() to the user_access_begin logic. Since commit 662bbcb2 ("mm, sched: Allow uaccess in atomic with pagefault_disable()"), might_fault() doesn't fire when called from sections where pagefaults are disabled, which must be the case when using the _inatomic variants of __get_user and __put_user. So the might_fault() in user_access_begin() is not a problem. There was a verification of user_mode() together with the access_ok(), but a second verification of user_mode() just after leads to an immediate return. The access_ok() is now part of user_access_begin(), which is called after that other user_mode() verification, so there is no need to check user_mode() again. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c95a648fdf75992c9d88f3c73cc23e7537fcf2ad.1615555354.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Define a simple ___get_user_instr() for ppc32 instead of defining ppc32 versions of the three get_user_instr() helpers. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e02f83ec74f26d76df2874f0ce4d5cc69c3469ae.1615398265.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Those two macros have only one user which is unsafe_get_user(). Put everything in one place and remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/439179c5e54c18f2cb8bdf1eea13ea0ef6b98375.1615398265.git.christophe.leroy@csgroup.eu
-
- 31 Mar, 2021 2 commits
-
-
Aneesh Kumar K.V authored
This reverts commit 675bceb0 ("powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc") All the related issues are fixed as of commit: f14312e1 ("mm/debug_vm_pgtable: avoid doing memory allocation with pgtable_t mapped.") Hence re-enable it. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210318034855.74513-1-aneesh.kumar@linux.ibm.com
-
Michael Ellerman authored
The vio bus is a fake bus, which we use on pseries LPARs (guests) to discover devices provided by the hypervisor. There's no need or sense in creating the vio bus on bare metal systems. Which is why commit 4336b933 ("powerpc/pseries: Make vio and ibmebus initcalls pseries specific") made the initialisation of the vio bus only happen in LPARs. However as a result of that commit we now see errors at boot on bare metal systems: Driver 'hvc_console' was unable to register with bus_type 'vio' because the bus was not initialized. Driver 'tpm_ibmvtpm' was unable to register with bus_type 'vio' because the bus was not initialized. This happens because those drivers are built-in, and are calling vio_register_driver(). It in turn calls driver_register() with a reference to vio_bus_type, but we haven't registered vio_bus_type with the driver core. Fix it by also guarding vio_register_driver() with a check to see if we are on pseries. Fixes: 4336b933 ("powerpc/pseries: Make vio and ibmebus initcalls pseries specific") Reported-by: Paul Menzel <pmenzel@molgen.mpg.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Paul Menzel <pmenzel@molgen.mpg.de> Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com> Link: https://lore.kernel.org/r/20210316010938.525657-1-mpe@ellerman.id.au
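One plausible shape of the fix, shown only as a sketch (the exact check in the applied patch may differ): have vio driver registration bail out early when not running as an LPAR, so built-in drivers never reach driver_register() with the unregistered vio_bus_type.

    #include <linux/errno.h>
    #include <asm/firmware.h>
    #include <asm/vio.h>

    /* Sketch: the real __vio_register_driver() carries on to driver_register()
     * after this guard. */
    int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
                              const char *mod_name)
    {
            /* vio_bus_type is only registered on pseries LPARs. */
            if (!firmware_has_feature(FW_FEATURE_LPAR))
                    return -ENODEV;

            /* ... normal registration path (driver_register()) goes here ... */
            return 0;
    }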
-
- 29 Mar, 2021 11 commits
-
-
dingsenjie authored
Remove unneeded variable: "rc". Signed-off-by: dingsenjie <dingsenjie@yulong.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210326115356.12444-1-dingsenjie@163.com
-
Chen Huang authored
When compiling powerpc with SMP disabled, the build fails with: arch/powerpc/kernel/watchdog.c: In function ‘watchdog_smp_panic’: arch/powerpc/kernel/watchdog.c:177:4: error: implicit declaration of function ‘smp_send_nmi_ipi’; did you mean ‘smp_send_stop’? [-Werror=implicit-function-declaration] 177 | smp_send_nmi_ipi(c, wd_lockup_ipi, 1000000); | ^~~~~~~~~~~~~~~~ | smp_send_stop cc1: all warnings being treated as errors make[2]: *** [scripts/Makefile.build:273: arch/powerpc/kernel/watchdog.o] Error 1 make[1]: *** [scripts/Makefile.build:534: arch/powerpc/kernel] Error 2 make: *** [Makefile:1980: arch/powerpc] Error 2 make: *** Waiting for unfinished jobs.... powerpc uses IPIs to implement the hardlockup watchdog, so HAVE_HARDLOCKUP_DETECTOR_ARCH should depend on SMP. Fixes: 2104180a ("powerpc/64s: implement arch-specific hardlockup watchdog") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Chen Huang <chenhuang5@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210327094900.938555-1-chenhuang5@huawei.com
-
Daniel Henrique Barboza authored
One of the reasons that dlpar_cpu_offline() can fail is when attempting to offline the last online CPU of the kernel. This can be observed in a pseries QEMU guest that has hotplugged CPUs. If the user offlines all other CPUs of the guest, and a hotplugged CPU is now the last online CPU, trying to reclaim it will fail. The current error message in this situation returns rc with -EBUSY and a generic explanation, e.g.: pseries-hotplug-cpu: Failed to offline CPU PowerPC,POWER9, rc: -16 EBUSY can be caused by other conditions, such as cpu_hotplug_disable being true. Throwing a more specific error message for this case, instead of just "Failed to offline CPU", makes it clearer that the error is in fact a known error situation rather than some other generic/unknown cause. This patch adds a 'last online' check in dlpar_cpu_offline() to catch the 'last online CPU' offline error, returning a more informative error message: pseries-hotplug-cpu: Unable to remove last online CPU PowerPC,POWER9 Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210323205056.52768-2-danielhb413@gmail.com
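In sketch form (the helper name is hypothetical; the real check sits inside dlpar_cpu_offline()): refuse the request up front when only one CPU remains online, and log the more specific message.

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/printk.h>

    /* Hypothetical helper illustrating the added 'last online CPU' check. */
    static int refuse_last_online_cpu_sketch(const char *cpu_name)
    {
            if (num_online_cpus() == 1) {
                    pr_warn("pseries-hotplug-cpu: Unable to remove last online CPU %s\n",
                            cpu_name);
                    return -EBUSY;
            }
            return 0;
    }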
-
Randy Dunlap authored
Drop the 'beginning of kernel-doc' notation markers (/**) in places that are not in kernel-doc format. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210325200820.16594-1-rdunlap@infradead.org
-
Bhaskar Chowdhury authored
s/filesytem/filesystem/ s/symantics/semantics/ Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210322023307.168754-1-unixbhaskar@gmail.com
-
Christophe Leroy authored
call_do_irq() and call_do_softirq() are simple enough to be worth inlining. Inlining them avoids an mflr/mtlr pair plus a save/reload on the stack. This is inspired by the s390 arch. Several other arches do more or less the same. The way the sparc arch does it seems odd, though. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210320122227.345427-1-mpe@ellerman.id.au
-
He Ying authored
Sparse warns: warning: symbol 'rfi_flush' was not declared. warning: symbol 'entry_flush' was not declared. warning: symbol 'uaccess_flush' was not declared. Define 'entry_flush' and 'uaccess_flush' as static because they are not referenced outside the file. Include asm/security_features.h in which 'rfi_flush' is declared. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: He Ying <heying24@huawei.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210316041148.29694-1-heying24@huawei.com
-
Christophe Leroy authored
Commit 92c8c16f ("powerpc/embedded6xx: Remove C2K board support") removed the last selector of CONFIG_MV64X60. As it is not a user selectable config, it can be removed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Wolfram Sang <wsa@kernel.org> # for I2C Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/19e57d16692dcd1ca67ba880d7273a57fab416aa.1616085654.git.christophe.leroy@csgroup.eu
-
kernel test robot authored
arch/powerpc/kernel/iommu.c:76:2-16: WARNING: NULL check before some freeing functions is not needed. NULL check before some freeing functions is not needed. Based on checkpatch warning "kfree(NULL) is safe this check is probably not required" and kfreeaddr.cocci by Julia Lawall. Generated by: scripts/coccinelle/free/ifnullfree.cocci Fixes: 691602aa ("powerpc/iommu/debug: Add debugfs entries for IOMMU tables") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: kernel test robot <lkp@intel.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210318234441.GA63469@f8e20a472e81
-
Christophe Leroy authored
It seems like other architectures, namely x86, arm64 and riscv at least, include the running function as the top entry when saving a stack trace with save_stack_trace_regs(). Functionality like KFENCE expects it. Do the same on powerpc; it allows KFENCE and other users to properly identify the faulting function as depicted below. Before the patch, KFENCE identified finish_task_switch.isra as the faulting function. [ 14.937370] ================================================================== [ 14.948692] BUG: KFENCE: invalid read in test_invalid_access+0x54/0x108 [ 14.948692] [ 14.956814] Invalid read at 0xdf98800a: [ 14.960664] test_invalid_access+0x54/0x108 [ 14.964876] finish_task_switch.isra.0+0x54/0x23c [ 14.969606] kunit_try_run_case+0x5c/0xd0 [ 14.973658] kunit_generic_run_threadfn_adapter+0x24/0x30 [ 14.979079] kthread+0x15c/0x174 [ 14.982342] ret_from_kernel_thread+0x14/0x1c [ 14.986731] [ 14.988236] CPU: 0 PID: 111 Comm: kunit_try_catch Tainted: G B 5.12.0-rc1-01537-g95f6e2088d7e-dirty #4682 [ 14.999795] NIP: c016ec2c LR: c02f517c CTR: c016ebd8 [ 15.004851] REGS: e2449d90 TRAP: 0301 Tainted: G B (5.12.0-rc1-01537-g95f6e2088d7e-dirty) [ 15.015274] MSR: 00009032 <EE,ME,IR,DR,RI> CR: 22000004 XER: 00000000 [ 15.022043] DAR: df98800a DSISR: 20000000 [ 15.022043] GPR00: c02f517c e2449e50 c1142080 e100dd24 c084b13c 00000008 c084b32b c016ebd8 [ 15.022043] GPR08: c0850000 df988000 c0d10000 e2449eb0 22000288 [ 15.040581] NIP [c016ec2c] test_invalid_access+0x54/0x108 [ 15.046010] LR [c02f517c] kunit_try_run_case+0x5c/0xd0 [ 15.051181] Call Trace: [ 15.053637] [e2449e50] [c005a68c] finish_task_switch.isra.0+0x54/0x23c (unreliable) [ 15.061338] [e2449eb0] [c02f517c] kunit_try_run_case+0x5c/0xd0 [ 15.067215] [e2449ed0] [c02f648c] kunit_generic_run_threadfn_adapter+0x24/0x30 [ 15.074472] [e2449ef0] [c004e7b0] kthread+0x15c/0x174 [ 15.079571] [e2449f30] [c001317c] ret_from_kernel_thread+0x14/0x1c [ 15.085798] Instruction dump: [ 15.088784] 8129d608 38e7ebd8 81020280 911f004c 39000000 995f0024 907f0028 90ff001c [ 15.096613] 3949000a 915f0020 3d40c0d1 3d00c085 <8929000a> 3908adb0 812a4b98 3d40c02f [ 15.104612] ================================================================== Fixes: 35de3b1a ("powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Marco Elver <elver@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/21324f9e2f21d1640c8397b4d1d857a9355a2283.1615881400.git.christophe.leroy@csgroup.eu
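The gist of the change, sketched against generic save_stack_trace_regs() style code (the surrounding plumbing is illustrative; nip and gpr[1] are the real powerpc pt_regs fields for the program counter and stack pointer): record the trapping PC before walking the frames, so the faulting function itself heads the trace.

    #include <asm/ptrace.h>

    /* Sketch only: make the faulting PC the first reported entry. */
    static void stack_trace_from_regs_sketch(struct pt_regs *regs,
                                             unsigned long *entries,
                                             unsigned int max, unsigned int *nr)
    {
            *nr = 0;
            if (*nr < max)
                    entries[(*nr)++] = regs->nip;   /* the running function */
            /* ...then walk the stack frames starting at regs->gpr[1] and
             * append the callers after this entry, as before... */
    }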
-
Christophe Leroy authored
This patch converts powerpc stacktrace to the generic ARCH_STACKWALK implemented by commit 214d8ca6 ("stacktrace: Provide common infrastructure") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/73b36bbb101299760b95ecd2cd3a46554bea8bf9.1615881400.git.christophe.leroy@csgroup.eu
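For reference, the generic hook the conversion implements (signature as declared for CONFIG_ARCH_STACKWALK in include/linux/stacktrace.h; the body here is just a placeholder sketch):

    #include <linux/sched.h>
    #include <linux/stacktrace.h>

    void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
                         struct task_struct *task, struct pt_regs *regs)
    {
            /* Walk the frames of 'task' (or start from 'regs' when non-NULL)
             * and hand each return address to consume_entry(cookie, addr),
             * stopping early if it returns false. */
    }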
-