- 15 Jul, 2020 16 commits
-
-
Christophe Leroy authored
Since commit ("1bd79336 powerpc: Fix various syscall/signal/swapcontext bugs"), getting save_general_regs() called without FULL_REGS() is very unlikely and generates a warning. The 32-bit version of save_general_regs() doesn't take care of it at all and copies all registers anyway since that commit. Moreover, commit 965dd3ad ("powerpc/64/syscall: Remove non-volatile GPR save optimisation") is another reason why it would never happen. So the same with 64-bit, don't worry about FULL_REGS() and copy all registers all the time. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/173de3b659fa3a5f126a0eb170522cccd909950f.1594125164.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Allocating KASAN pages in MMU_init() is too early: the kernel doesn't yet have access to the entire memory space, and memblock_alloc() fails when the kernel is a bit big. Do it from kasan_init() instead. Fixes: 2edb16ef ("powerpc/32: Add KASAN support") Fixes: d2a91cef ("powerpc/kasan: Fix shadow pages allocation failure") Cc: stable@vger.kernel.org Reported-by: Erhard F. <erhard_f@mailbox.org> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://bugzilla.kernel.org/show_bug.cgi?id=208181 Link: https://lore.kernel.org/r/63048fcea8a1c02f75429ba3152f80f7853f87fc.1593690707.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
This reverts commit d2a91cef. That commit moved too much work into kasan_init(). The allocation of shadow pages has to be moved for the reason explained in that patch, but the allocation of page tables still needs to be done before switching to the final hash table. First revert the incorrect commit; the following patch redoes it properly. Fixes: d2a91cef ("powerpc/kasan: Fix shadow pages allocation failure") Cc: stable@vger.kernel.org Reported-by: Erhard F. <erhard_f@mailbox.org> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://bugzilla.kernel.org/show_bug.cgi?id=208181 Link: https://lore.kernel.org/r/3667deb0911affbf999b99f87c31c77d5e870cd2.1593690707.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The documentation wrongly says that book3s/32 CPUs have a hash MMU. The 603 and e300 cores only have a software-loaded TLB. The 755, the 7450 family and the e600 core have both a hash MMU and a software-loaded TLB; which one is used can be selected by setting a bit in HID2 (755) or HID0 (the others). For the time being this is not supported by the kernel. Make this explicit in the documentation. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/261923c075d1cb49d02493685e8585d4ea2a5197.1593698951.git.christophe.leroy@csgroup.eu
-
Nicholas Piggin authored
Add a testcase that tries to trigger the FPU denormal exception on Power8 or earlier CPUs. Prior to commit 4557ac6b ("powerpc/64s/exception: Fix 0x1500 interrupt handler crash") this would trigger a crash such as:

  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in: iptable_mangle xt_MASQUERADE iptable_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ipt_REJECT nf_reject_ipv4 xt_tcpudp tun bridge stp llc ip6table_filter ip6_tables iptable_filter fuse kvm_hv binfmt_misc squashfs mlx4_ib ib_uverbs dm_multipath scsi_dh_rdac scsi_dh_alua ib_core mlx4_en sr_mod cdrom bnx2x lpfc mlx4_core crc_t10dif scsi_transport_fc sg mdio vmx_crypto crct10dif_vpmsum leds_powernv powernv_rng rng_core led_class powernv_op_panel sunrpc ip_tables x_tables autofs4
  CPU: 159 PID: 6854 Comm: fpu_denormal Not tainted 5.8.0-rc2-gcc-8.2.0-00092-g4ec7aaab0828 #192
  NIP:  c0000000000100ec LR: c00000000001b85c CTR: 0000000000000000
  REGS: c000001dd818f770 TRAP: 1500  Not tainted (5.8.0-rc2-gcc-8.2.0-00092-g4ec7aaab0828)
  MSR:  900000000290b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 24002884  XER: 20000000
  CFAR: c00000000001005c IRQMASK: 1
  GPR00: c00000000001c4c8 c000001dd818fa00 c00000000171c200 c000001dd8101570
  GPR04: 0000000000000000 c000001dd818fe90 c000001dd8101590 000000000000001d
  GPR08: 0000000000000010 0000000000002000 c000001dd818fe90 fffffffffc48ac60
  GPR12: 0000000000002200 c000001ffff4f480 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: 0000000000000000 00007fffab225b40 0000000000000001 c000000001757168
  GPR24: c000001dd8101570 c0000018027b00f0 c000001dd8101570 c000000001496098
  GPR28: c00000000174ad05 c000001dd8100000 c000001dd8100000 c000001dd8100000
  NIP save_fpu+0xa8/0x2ac
  LR  __giveup_fpu+0x2c/0xd0
  Call Trace:
    0xc000001dd818fa80 (unreliable)
    giveup_all+0x118/0x120
    __switch_to+0x124/0x6c0
    __schedule+0x390/0xaf0
    do_task_dead+0x70/0x80
    do_exit+0x8fc/0xe10
    do_group_exit+0x64/0xd0
    sys_exit_group+0x24/0x30
    system_call_exception+0x164/0x270
    system_call_common+0xf0/0x278

Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Split out of fix patch, add oops log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200708074942.1713396-1-npiggin@gmail.com
-
Nicholas Piggin authored
Returning from an interrupt or syscall to a signal handler currently begins execution directly at the handler's entry point, with LR set to the address of the sigreturn trampoline. When the signal handler function returns, it runs the trampoline. It looks like this:

  # interrupt at user address xyz
  # kernel stuff... signal is raised
  rfid
  # void handler(int sig)
  addis 2,12,.TOC.-.LCF0@ha
  addi 2,2,.TOC.-.LCF0@l
  mflr 0
  std 0,16(1)
  stdu 1,-96(1)
  # handler stuff
  ld 0,16(1)
  mtlr 0
  blr
  # __kernel_sigtramp_rt64
  addi r1,r1,__SIGNAL_FRAMESIZE
  li r0,__NR_rt_sigreturn
  sc
  # kernel executes rt_sigreturn
  rfid
  # back to user address xyz

Note the blr with no matching bl. This can corrupt the return predictor. Solve this by instead resuming execution at the signal trampoline, which then calls the signal handler. The qtrace-tools link_stack checker confirms the entire user/kernel/vdso cycle is balanced after this patch, whereas it is not upstream. Alan confirms the DWARF unwind info still looks good. gdb still recognises the signal frame and can step into parent frames if it breaks inside a signal handler. Performance is pretty noisy, not a very significant change on a POWER9 here, but branch misses are consistently a lot lower on a microbenchmark:

  Performance counter stats for './signal':

       13,085.72 msec task-clock       # 1.000 CPUs utilized
  45,024,760,101      cycles           # 3.441 GHz
  65,102,895,542      instructions     # 1.45 insn per cycle
  11,271,673,787      branches         # 861.372 M/sec
      59,468,979      branch-misses    # 0.53% of all branches

       12,989.09 msec task-clock       # 1.000 CPUs utilized
  44,692,719,559      cycles           # 3.441 GHz
  65,109,984,964      instructions     # 1.46 insn per cycle
  11,282,136,057      branches         # 868.585 M/sec
      39,786,942      branch-misses    # 0.35% of all branches

Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200511101952.1463138-1-npiggin@gmail.com
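A hedged sketch of the kernel-side shape of the change (the register used to pass the handler and the trampoline symbol are assumptions for illustration; the actual patch wires this up through the vdso):

    /* Sketch: resume in a vdso trampoline that calls the handler and then
     * falls through to the sigreturn sequence, instead of entering the
     * handler directly with LR pointing at the trampoline. */
    static void setup_signal_entry(struct pt_regs *regs, struct ksignal *ksig,
                                   unsigned long tramp_entry /* assumed vdso symbol */)
    {
            regs->nip = tramp_entry;                              /* start in the trampoline */
            regs->ctr = (unsigned long)ksig->ka.sa.sa_handler;    /* trampoline calls this   */
    }

The key point is that the trampoline's call to the handler gives the handler's final blr a matching call, so the hardware link stack stays balanced.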
-
Arnd Bergmann authored
The kernel test robot pointed out a slightly different error message after the recent commit 5456ffde ("powerpc/spufs: simplify spufs core dumping") to spufs, for a configuration that never worked:

  powerpc64-linux-ld: arch/powerpc/platforms/cell/spufs/file.o: in function `.spufs_proxydma_info_dump':
  >> file.c:(.text+0x4c68): undefined reference to `.dump_emit'
  powerpc64-linux-ld: arch/powerpc/platforms/cell/spufs/file.o: in function `.spufs_dma_info_dump':
  file.c:(.text+0x4d70): undefined reference to `.dump_emit'
  powerpc64-linux-ld: arch/powerpc/platforms/cell/spufs/file.o: in function `.spufs_wbox_info_dump':
  file.c:(.text+0x4df4): undefined reference to `.dump_emit'

Add a Kconfig dependency to prevent this from happening again. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200706132302.3885935-1-arnd@arndb.de
-
Oliver O'Halloran authored
pnv_ioda_setup_bus_dma() is only used when a passed through PE is returned to the host. If the kernel is built without IOMMU support this is dead code. Move it under the #ifdef with the rest of the IOMMU API support. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200705133557.443607-2-oohall@gmail.com
-
Oliver O'Halloran authored
The kernel test robot noticed that these functions are non-static, which causes Clang to print some warnings. They are only called via ppc_md function pointers, so there's no need for them to be non-static. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200705133557.443607-1-oohall@gmail.com
-
Abhishek Goel authored
Commit 1961acad removed the usage of validate_dt_prop_sizes(). Remove this now-unused function. Signed-off-by: Abhishek Goel <huntbag@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200706053258.121475-1-huntbag@linux.vnet.ibm.com
-
Srikar Dronamraju authored
Unlike drivers/base/cacheinfo, the powerpc cacheinfo code does not expose shared_cpu_list under /sys/devices/system/cpu/cpu<n>/cache/index<m>. Add shared_cpu_list to the per-cpu, per-index directory to maintain parity with x86. Some scripts (for example mmtests, https://github.com/gormanm/mmtests) look for shared_cpu_list instead of shared_cpu_map.

Before this patch:
  # ls /sys/devices/system/cpu0/cache/index1
  coherency_line_size  number_of_sets  size  ways_of_associativity
  level                shared_cpu_map  type
  # cat /sys/devices/system/cpu0/cache/index1/shared_cpu_map
  00ff
  #

After this patch:
  # ls /sys/devices/system/cpu0/cache/index1
  coherency_line_size  number_of_sets   shared_cpu_map  type
  level                shared_cpu_list  size            ways_of_associativity
  # cat /sys/devices/system/cpu0/cache/index1/shared_cpu_map
  00ff
  # cat /sys/devices/system/cpu0/cache/index1/shared_cpu_list
  0-7
  #

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200629103703.4538-4-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
In anticipation of implementing shared_cpu_list, move code under shared_cpu_map_show() to a common function. No functional changes. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200629103703.4538-3-srikar@linux.vnet.ibm.com
-
Srikar Dronamraju authored
Tejun Heo had modified shared_cpu_map_show() to use scnprintf() instead of cpumap_print() while adding support for the *pb[l] format. Refer to commit 0c118b7b ("powerpc: use %*pb[l] to print bitmaps including cpumasks and nodemasks"). cpumap_print_to_pagebuf() is the standard function to print a cpumap. With commit 9cf79d11 ("bitmap: remove explicit newline handling using scnprintf format string"), there is no need to print an explicit newline and trailing null character. cpumap_print_to_pagebuf() internally uses scnprintf(). Hence replace scnprintf() with cpumap_print_to_pagebuf(). Note: shared_cpu_map_show() in drivers/base/cacheinfo.c already uses cpumap_print_to_pagebuf().

Before this patch:
  # cat /sys/devices/system/cpu0/cache/index1/shared_cpu_map
  00ff

  #
(Notice the extra blank line.)

After this patch:
  # cat /sys/devices/system/cpu0/cache/index1/shared_cpu_map
  00ff
  #

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200629103703.4538-2-srikar@linux.vnet.ibm.com
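The three cacheinfo patches above boil down to one small pattern. Here is a hedged sketch (helper, struct and accessor names are assumptions modelled on drivers/base/cacheinfo, not the exact powerpc code): a common helper prints the cpumask either as a hex map or as a range list via cpumap_print_to_pagebuf(), and two thin show functions wrap it.

    /* Sketch: one helper, two sysfs attributes (mask vs. list format). */
    static ssize_t show_shared_cpumap(struct kobject *k, char *buf, bool list)
    {
            struct cache_index_dir *index = kobj_to_cache_index_dir(k); /* assumed */
            const struct cpumask *mask = &index->cache->shared_cpu_map; /* assumed */

            return cpumap_print_to_pagebuf(list, buf, mask);
    }

    static ssize_t shared_cpu_map_show(struct kobject *k,
                                       struct kobj_attribute *attr, char *buf)
    {
            return show_shared_cpumap(k, buf, false);   /* "00ff" style hex map */
    }

    static ssize_t shared_cpu_list_show(struct kobject *k,
                                        struct kobj_attribute *attr, char *buf)
    {
            return show_shared_cpumap(k, buf, true);    /* "0-7" style range list */
    }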
-
Philippe Bergheaud authored
Some OpenCAPI FPGA images allow controlling whether the FPGA should be reloaded on the next adapter reset. If supported, the image specifies it through a Vendor Specific DVSEC in the config space of function 0. Signed-off-by: Philippe Bergheaud <felix@linux.ibm.com> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200619140439.153962-1-fbarrat@linux.ibm.com
-
Sam Bobroff authored
I'm sorry to say I can no longer maintain this position. Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/aec7d729c28e35c7fa9969ec50229080c771195c.1593471043.git.sbobroff@linux.ibm.com
-
Anton Blanchard authored
I'm seeing RCU warnings when exiting xmon. xmon resets the NMI watchdog, but does nothing with the RCU stall or soft lockup watchdogs. Add a helper function that handles all three. Signed-off-by: Anton Blanchard <anton@ozlabs.org> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200630100218.62a3c3fb@kryten.localdomain
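A hedged sketch of what such a helper can look like (the helper's name is illustrative; the three callees are the standard kernel interfaces for the soft lockup, RCU stall and NMI watchdogs):

    #include <linux/nmi.h>
    #include <linux/rcupdate.h>

    /* Pet all three watchdogs after sitting in xmon for a long time. */
    static void xmon_touch_watchdogs(void)
    {
            touch_softlockup_watchdog_sync();   /* soft lockup detector */
            rcu_cpu_stall_reset();              /* RCU stall detector   */
            touch_nmi_watchdog();               /* hard lockup detector */
    }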
-
- 08 Jul, 2020 1 commit
-
-
Desnes A. Nunes do Rosario authored
An extra count on ebb_state.stats.pmc_count[PMC_INDEX(pmc)] is being performed when count_pmc() is used to reset PMCs on a few selftests. This extra pmc_count can occasionally invalidate results, such as the ones from cycles_test shown hereafter. The ebb_check_count() failed with an above-the-upper-limit error due to the extra value on ebb_state.stats.pmc_count. Furthermore, this extra count is also indicated by an extra PMC1 trace_log on the output of the cycle test (as well as on pmc56_overflow_test):

  ==========
  ...
  [21]: counter = 8
  [22]: register SPRN_MMCR0 = 0x0000000080000080
  [23]: register SPRN_PMC1  = 0x0000000080000004
  [24]: counter = 9
  [25]: register SPRN_MMCR0 = 0x0000000080000080
  [26]: register SPRN_PMC1  = 0x0000000080000004
  [27]: counter = 10
  [28]: register SPRN_MMCR0 = 0x0000000080000080
  [29]: register SPRN_PMC1  = 0x0000000080000004
  >> [30]: register SPRN_PMC1 = 0x000000004000051e
  PMC1 count (0x280000546) above upper limit 0x2800003e8 (+0x15e)
  [FAIL] Test FAILED on line 52
  failure: cycles
  ==========

Signed-off-by: Desnes A. Nunes do Rosario <desnesn@linux.ibm.com> Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200626164737.21943-1-desnesn@linux.ibm.com
-
- 06 Jul, 2020 1 commit
-
-
Bin Meng authored
Drop CONFIG_MTD_M25P80, which was removed in commit b35b9a10 ("mtd: spi-nor: Move m25p80 code in spi-nor.c"). Signed-off-by: Bin Meng <bin.meng@windriver.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1588394694-517-1-git-send-email-bmeng.cn@gmail.com
-
- 30 Jun, 2020 5 commits
-
-
Michael Ellerman authored
With CONFIG_OF_ALL_DTBS=y, as set by e.g. allmodconfig, we see lots of warnings about our dts files, such as:

  arch/powerpc/boot/dts/glacier.dts:492.26-532.5: Warning (pci_bridge): /plb/pciex@d00000000: node name is not "pci" or "pcie"

The node name should not particularly matter, it's just a name, and AFAICS there's no kernel code that cares whether nodes are *named* "pciex" or "pcie". So shut up these warnings by converting to the name dtc wants. As always there's some risk this could break something obscure that does rely on the name, in which case we can revert. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200623130320.405852-1-mpe@ellerman.id.au
-
Nathan Chancellor authored
Clang warns:

  arch/powerpc/boot/main.c:107:18: warning: array comparison always evaluates to a constant [-Wtautological-compare]
          if (_initrd_end > _initrd_start) {
                          ^
  arch/powerpc/boot/main.c:155:20: warning: array comparison always evaluates to a constant [-Wtautological-compare]
          if (_esm_blob_end <= _esm_blob_start)
                             ^
  2 warnings generated.

These are not true arrays; they are linker-defined symbols, which are just addresses. Using the address-of operator silences the warning and does not change the resulting assembly with either clang/ld.lld or gcc/ld (tested with diff + objdump -Dr). Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Tested-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200624035920.835571-1-natechancellor@gmail.com
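The shape of the fix, shown as a small standalone program. It uses the historical etext/end symbols that a default ELF link provides, purely to make the example self-contained; the boot wrapper's own symbols (_initrd_start and friends) follow the same pattern.

    #include <stdio.h>

    /* Linker-defined symbols: only their addresses mean anything. */
    extern char etext[], end[];

    int main(void)
    {
            /* "end > etext" compares array names and trips clang's
             * -Wtautological-compare; taking the address first silences
             * the warning and generates the same code. */
            if (&end > &etext)
                    printf("end (%p) is above etext (%p)\n",
                           (void *)end, (void *)etext);
            return 0;
    }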
-
Sandipan Das authored
Apart from read and write access, memory protection keys can also be used for restricting execute permission of pages on powerpc. This adds a test to verify if the feature works as expected. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200604125610.649668-4-sandipan@linux.ibm.com
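A hedged userspace sketch of the mechanism such a test exercises. PKEY_DISABLE_EXECUTE is the powerpc-specific access right; the value defined below is an assumption for systems whose headers lack it, and this sketch only sets up the restriction, whereas the real selftest also jumps into the page and checks that the expected fault is delivered.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #ifndef PKEY_DISABLE_EXECUTE
    #define PKEY_DISABLE_EXECUTE 0x4   /* powerpc-specific right; value assumed */
    #endif

    int main(void)
    {
            long page = sysconf(_SC_PAGESIZE);
            int pkey = pkey_alloc(0, PKEY_DISABLE_EXECUTE);
            void *p = mmap(NULL, page, PROT_READ | PROT_WRITE | PROT_EXEC,
                           MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

            if (pkey < 0 || p == MAP_FAILED)
                    return 1;

            /* Tag the page with the key: instruction fetch from it should now
             * fault even though the mapping itself is PROT_EXEC. */
            if (pkey_mprotect(p, page, PROT_READ | PROT_WRITE | PROT_EXEC, pkey))
                    return 1;

            printf("execute-disable pkey %d applied to %p\n", pkey, p);
            return 0;
    }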
-
Sandipan Das authored
This moves a function to test if the MMU is in Hash mode under the generic test utilities. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200604125610.649668-3-sandipan@linux.ibm.com
-
Sandipan Das authored
The Power ISA mandates that all writes to the Authority Mask Register (AMR) must be preceded and followed by a context-synchronizing instruction. This makes sure that the tests follow this requirement when attempting to update a pkey's access rights. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200604125610.649668-2-sandipan@linux.ibm.com
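A sketch of what a compliant userspace AMR update looks like (SPR 13 is the problem-state AMR; the helper name is illustrative):

    /* Write the AMR with the required context-synchronising instructions
     * before and after the mtspr, per the ISA. */
    static inline void write_amr(unsigned long value)
    {
            asm volatile("isync; mtspr 13, %0; isync"
                         : : "r" (value) : "memory");
    }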
-
- 22 Jun, 2020 17 commits
-
-
Christophe Leroy authored
Move ptep_get() close to pte_update(), in an #ifdef section already dedicated to powerpc 8xx. This section contains an explanation of the layout of page table entries. Also modify it to return the pte value repeated 4 times instead of padding with zeroes. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9f2df6621fcaf9eba15fadc61c169d0c8e2fb849.1592481938.git.christophe.leroy@csgroup.eu
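A sketch of the described helper, assuming the 8xx 16k-pages layout where pte_t carries the same value in four slots (field and config names follow the usual powerpc headers but are assumptions here):

    #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)
    #define __HAVE_ARCH_PTEP_GET
    static inline pte_t ptep_get(pte_t *ptep)
    {
            pte_basic_t val = READ_ONCE(ptep->pte);
            /* Return the value replicated in all four slots instead of
             * padding the remaining slots with zeroes. */
            pte_t pte = {val, val, val, val};

            return pte;
    }
    #endif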
-
Aneesh Kumar K.V authored
With hash translation, the hypervisor can hint the LPAR about a 16GB contiguous range via the ibm,expected#pages property. The kernel marks the range specified in the device tree as reserved. Avoid doing this when using radix translation. Radix translation only supports 1G gigantic hugepages, and the kernel can do the 1G gigantic hugepage allocation via early memblock reservation. This can be done because with radix translation pages are not required to be contiguous on the host. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200622064019.16682-1-aneesh.kumar@linux.ibm.com
-
Imre Kaloz authored
This patch splits up the compile flags between ppc40x and ppc44x. Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: Imre Kaloz <kaloz@openwrt.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1482393968-60623-1-git-send-email-john@phrozen.org
-
Christophe Leroy authored
FIX_EARLY_DEBUG_BASE reserves a 128k area for debugging. When the page size is 256k, the calculation results in a 0 number of pages, leading to the following failure:

  CC      arch/powerpc/kernel/asm-offsets.s
  In file included from ./arch/powerpc/include/asm/nohash/32/pgtable.h:77:0,
                   from ./arch/powerpc/include/asm/nohash/pgtable.h:8,
                   from ./arch/powerpc/include/asm/pgtable.h:20,
                   from ./include/linux/pgtable.h:6,
                   from ./arch/powerpc/include/asm/kup.h:42,
                   from ./arch/powerpc/include/asm/uaccess.h:9,
                   from ./include/linux/uaccess.h:11,
                   from ./include/linux/crypto.h:21,
                   from ./include/crypto/hash.h:11,
                   from ./include/linux/uio.h:10,
                   from ./include/linux/socket.h:8,
                   from ./include/linux/compat.h:15,
                   from arch/powerpc/kernel/asm-offsets.c:14:
  ./arch/powerpc/include/asm/fixmap.h:75:2: error: overflow in enumeration values
    __end_of_permanent_fixed_addresses,
    ^
  make[2]: *** [arch/powerpc/kernel/asm-offsets.s] Error 1

Ensure the debug area is at least one page. Fixes: b8e8efaa ("powerpc: reserve fixmap entries for early debug") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ca8c9f8249f523b1fab873e67b81b11989d46553.1592207216.git.christophe.leroy@csgroup.eu
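One way to express the fix, as a hedged sketch (the macro name is illustrative, not the one in arch/powerpc/include/asm/fixmap.h): round 128k up to a whole number of pages, so the result can never be zero, even with 256k pages.

    /* Broken with 256k pages: 128 * 1024 / PAGE_SIZE evaluates to 0.
     * Sketch of the fix: round up, so the area is always at least one page. */
    #define FIX_EARLY_DEBUG_PAGES  (ALIGN(128 * 1024, PAGE_SIZE) / PAGE_SIZE)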
-
Jordan Niethe authored
Extend the alignment handler selftest to exercise prefixed load store instructions. Add tests for prefixed VSX, floating point and integer instructions. Skip prefix tests if ISA version does not support prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Tested-by: Alistair Popple <alistair@popple.id.au> [mpe: Fixup PPC_FEATURE2_ARCH_3_1 naming as noted by Alistair] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200520021103.19798-2-jniethe5@gmail.com
-
Jordan Niethe authored
The alignment handler selftest needs cache-inhibited memory, and currently /dev/fb0 is relied on to provide this. This prevents running the test on systems without /dev/fb0 (e.g., mambo). Read the command-line arguments for an optional path to be used instead, as well as an optional offset to be used when mmaping this path. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Tested-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200520021103.19798-1-jniethe5@gmail.com
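A hedged sketch of the kind of argument handling described (variable names and the option syntax are illustrative; the real selftest may parse its arguments differently):

    #include <stdlib.h>

    static const char *cipath = "/dev/fb0";   /* default cache-inhibited file */
    static off_t cioffset;

    int main(int argc, char *argv[])
    {
            if (argc > 1)
                    cipath = argv[1];          /* alternative file, e.g. on mambo */
            if (argc > 2)
                    cioffset = strtoull(argv[2], NULL, 0);   /* mmap offset into it */

            /* ... open cipath, mmap it at cioffset, run the alignment tests ... */
            return 0;
    }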
-
Alexey Kardashevskiy authored
The iommu_table_ops::xchg_no_kill() callback updates a TCE. It is quite possible that not the entire table is allocated if it is huge and multilevel, so xchg may also allocate subtables. On failure it returns H_HARDWARE for a failed allocation, or H_TOO_HARD if an allocation is needed but cannot be done because the alloc parameter is "false" (set when called with MMU=off to force a retry with MMU=on). The problem is that having separate errors only matters in real mode (MMU=off), but the only caller with alloc="false" does not check the exact error code and simply returns H_TOO_HARD; for every other mode alloc is "true". Also, the function is called from the ioctl() handler of the VFIO SPAPR TCE IOMMU subdriver, which does not expect hypervisor error codes (H_xxx) and would expose them to userspace. Convert the wrong error codes to -ENOMEM. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200617003835.48831-1-aik@ozlabs.ru
-
Satheesh Rajendran authored
Argument "align" in alloc_shared_lppaca() was unused inside the function. Let's drop it and update code comment for page alignment. Signed-off-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com> Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> [mpe: Massage comment wording/formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200612142953.135408-1-sathnaga@linux.vnet.ibm.com
-
Christophe Leroy authored
H_SUCCESS is only defined when CONFIG_PPC_PSERIES is defined, but != H_SUCCESS simply means != 0. Modify the test accordingly, so it no longer depends on H_SUCCESS being defined. Fixes: 65e701b2 ("powerpc/ptdump: drop non vital #ifdefs") Cc: stable@vger.kernel.org Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/795158fc1d2b3dff3bf7347881947a887ea9391a.1592227105.git.christophe.leroy@csgroup.eu
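The shape of the change, as a hedged fragment (the lpar_rc variable is illustrative, not necessarily the name used in the patch):

    /* Before: fails to build when CONFIG_PPC_PSERIES (and thus H_SUCCESS)
     * is not defined. */
    if (lpar_rc != H_SUCCESS)
            continue;

    /* After: H_SUCCESS is 0, so testing for non-zero is equivalent and
     * compiles in every configuration. */
    if (lpar_rc)
            continue;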
-
Joe Perches authored
IS_ENABLED() matches names exactly, so the missing "CONFIG_" prefix means this code would never be built. Also fixes a missing newline in pr_warn(). Fixes: 970d54f9 ("powerpc/book3s64/hash: Disable 16M linear mapping size if not aligned") Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/202006050717.A2F9809E@keescook
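A minimal illustration of the pitfall (the option name is only an example taken from the area the Fixes: commit touches, not necessarily the exact symbol in the patch):

    /* Always false: IS_ENABLED() does token matching, and no macro named
     * "STRICT_KERNEL_RWX" exists, so the guarded code is compiled out. */
    bool broken = IS_ENABLED(STRICT_KERNEL_RWX);

    /* Correct: tests the Kconfig symbol CONFIG_STRICT_KERNEL_RWX. */
    bool fixed = IS_ENABLED(CONFIG_STRICT_KERNEL_RWX);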
-
Alexey Kardashevskiy authored
xive_native_provision_pages() allocates memory and passes the pointer to OPAL, so kmemleak cannot find the pointer usage in kernel memory and produces a false positive report (below). Even if the kernel did scan OPAL memory, it would be unable to deal with __pa() addresses anyway. This silences the warning.

  unreferenced object 0xc000200350c40000 (size 65536):
    comm "qemu-system-ppc", pid 2725, jiffies 4294946414 (age 70776.530s)
    hex dump (first 32 bytes):
      02 00 00 00 50 00 00 00 00 00 00 00 00 00 00 00  ....P...........
      01 00 08 07 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<0000000081ff046c>] xive_native_alloc_vp_block+0x120/0x250
      [<00000000d555d524>] kvmppc_xive_compute_vp_id+0x248/0x350 [kvm]
      [<00000000d69b9c9f>] kvmppc_xive_connect_vcpu+0xc0/0x520 [kvm]
      [<000000006acbc81c>] kvm_arch_vcpu_ioctl+0x308/0x580 [kvm]
      [<0000000089c69580>] kvm_vcpu_ioctl+0x19c/0xae0 [kvm]
      [<00000000902ae91e>] ksys_ioctl+0x184/0x1b0
      [<00000000f3e68bd7>] sys_ioctl+0x48/0xb0
      [<0000000001b2c127>] system_call_exception+0x124/0x1f0
      [<00000000d2b2ee40>] system_call_common+0xe8/0x214

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200612043303.84894-1-aik@ozlabs.ru
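A hedged sketch of the usual pattern for silencing such a report (the allocation call and function name here are generic, not necessarily those in xive_native_provision_pages()):

    #include <linux/kmemleak.h>
    #include <linux/slab.h>

    /* Sketch: allocate a buffer whose only reference will live in OPAL. */
    static void *alloc_for_opal(size_t size)
    {
            void *p = kmalloc(size, GFP_KERNEL);

            if (p)
                    /* OPAL keeps the only (physical-address) reference, so
                     * kmemleak would otherwise flag this as unreferenced. */
                    kmemleak_ignore(p);
            return p;
    }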
-
Chris Packham authored
Regenerate defconfigs to remove CONFIG_CMDLINE_BOOL and the default CONFIG_CMDLINE where applicable. Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200611224220.25066-3-chris.packham@alliedtelesis.co.nz
-
Chris Packham authored
Since commit cbe46bd4 ("powerpc: remove CONFIG_CMDLINE #ifdef mess") CONFIG_CMDLINE has always had a value regardless of CONFIG_CMDLINE_BOOL. For example: $ make ARCH=powerpc defconfig $ cat .config # CONFIG_CMDLINE_BOOL is not set CONFIG_CMDLINE="" When enabling CONFIG_CMDLINE_BOOL this value is kept making the 'default "..." if CONFIG_CMDLINE_BOOL' ineffective. $ ./scripts/config --enable CONFIG_CMDLINE_BOOL $ cat .config CONFIG_CMDLINE_BOOL=y CONFIG_CMDLINE="" Remove CONFIG_CMDLINE_BOOL and the inaccessible default. Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200611224220.25066-2-chris.packham@alliedtelesis.co.nz
-
Murilo Opsfelder Araujo authored
The macro ISA_V3_1 was defined but never used. Use it instead of the literal value. Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200610215114.167544-4-muriloo@linux.ibm.com
-
Murilo Opsfelder Araujo authored
The macro ISA_V3_0B was defined but never used. Use it instead of the literal value. Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200610215114.167544-3-muriloo@linux.ibm.com
-
Murilo Opsfelder Araujo authored
Macro ISA_V2_07B is defined but not used anywhere else in the code. Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200610215114.167544-2-muriloo@linux.ibm.com
-
Nicholas Piggin authored
Using blrl for an indirect function call is not recommended, as it may corrupt the link stack predictor. This is not a performance-critical path, but it should be fixed for consistency. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200611121119.1015740-1-npiggin@gmail.com
-