- 31 Oct, 2018 8 commits
-
-
Naveen N. Rao authored
We are using 'dscr_insn' as a label in inline asm to identify if a SIGILL was generated by the mtspr instruction at that point. However, with inline assembly, the compiler is still free to duplicate the asm statement for optimization purposes, which results in the label being defined twice with the error: /tmp/ccerQCql.s:874: Error: symbol `dscr_insn' is already defined With different compiler versions, we may also see: /tmp/ccJzLDlN.o:(.toc+0x0): undefined reference to `dscr_insn' Remove the use of the label in the inline assembly. Instead, just look for the offending instruction in the signal handler. Fixes: d2bf7932 ("selftests/powerpc: Add test to verify rfi flush across a system call") Reported-by: Breno Leitao <leitao@debian.org> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Tested-by: Breno Leitao <leitao@debian.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Use TEST_GEN_PROGS and don't redefine 'all'; this makes the out-of-tree build work. We need to move the extra dependencies below the include of lib.mk, because it adds the $(OUTPUT) prefix if it's defined. We can also drop the clean rule; lib.mk does it for us. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
For the out-of-tree build to work we need to tell switch_endian_test to look for check-reversed.S in $(OUTPUT). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Joel Stanley authored
When running the ebb tests after building on a ppc64le Ubuntu machine: $ pmu/ebb/reg_access_test: error while loading shared libraries: R_PPC64_ADDR16_HI reloc at 0x000000013a965130 for symbol `' out of range This is because the Ubuntu toolchain builds with PIE enabled by default. Change it to be always off instead. Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Joel Stanley authored
We should use TEST_GEN_PROGS, not TEST_PROGS. That tells the selftests makefile (lib.mk) that those tests are generated (built), and so it adds the $(OUTPUT) prefix for us, making the out-of-tree build work correctly. It also means we don't need our own clean rule, lib.mk does it. We also have to update the signal_tm rule to use $(OUTPUT). Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Joel Stanley authored
We should use TEST_GEN_PROGS, not TEST_PROGS. That tells the selftests makefile (lib.mk) that those tests are generated (built), and so it adds the $(OUTPUT) prefix for us, making the out-of-tree build work correctly. It also means we don't need our own clean rule, lib.mk does it. We also have to update the ptrace-pkey and core-pkey rules to use $(OUTPUT). Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Joel Stanley authored
When building with clang (8 trunk, 7.0 release) the frame size limit is hit: arch/powerpc/xmon/xmon.c:452:12: warning: stack frame size of 2576 bytes in function 'xmon_core' [-Wframe-larger-than=] Some investigation by Naveen indicates this is due to clang saving the addresses of printf format strings on the stack. While this issue is investigated, bump up the frame size limit for xmon when building with clang. Link: https://github.com/ClangBuiltLinux/linux/issues/252 Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Joel Stanley authored
Typing 'make' inside tools/testing/selftests/powerpc gave a build warning: BUILD_TARGET=tools/testing/selftests/powerpc/security; mkdir -p $BUILD_TARGET; make OUTPUT=$BUILD_TARGET -k -C security all make[1]: Entering directory 'tools/testing/selftests/powerpc/security' ../../lib.mk:20: ../../../../scripts/subarch.include: No such file or directory make[1]: *** No rule to make target '../../../../scripts/subarch.include'. make[1]: Failed to remake makefile '../../../../scripts/subarch.include'. The build is one level deeper than lib.mk thinks it is. Set top_srcdir to set things straight. Note that the test program is still built. Signed-off-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 30 Oct, 2018 1 commit
-
-
Naveen N. Rao authored
When running the rfi_flush test, if the system is loaded, we see two issues: 1. The L1d misses when rfi_flush is disabled increase significantly due to other workloads interfering with the cache. 2. The L1d misses when rfi_flush is enabled sometimes go slightly below the expected number of misses. To address these, let's relax the expected number of L1d misses: 1. When rfi_flush is disabled, we allow up to half the number of misses expected when rfi_flush is enabled. 2. When rfi_flush is enabled, we allow the number of cache misses to be ~1% lower than expected. Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Tested-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 29 Oct, 2018 1 commit
-
-
Michael Ellerman authored
Merge https://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux. Updates from Scott: "This contains a couple device tree updates, and a fix for a missing prototype warning."
-
- 26 Oct, 2018 10 commits
-
-
Felipe Rechia authored
Fix a bug introduced by the creation of flush_all_to_thread() for processors that have SPE (Signal Processing Engine) and use it to compute floating-point operations. From the userspace perspective, the problem was seen in attempts to compute floating-point operations which should generate exceptions. For example: fork(); float x = 0.0 / 0.0; isnan(x); // forked process returns False (should be True) The operation above should also always cause the SPEFSCR FINV bit to be set. However, the SPE floating-point exceptions were turned off after a fork(). Kernel versions prior to the bug used flush_spe_to_thread(), which first saves the SPEFSCR register values in tsk->thread and then calls giveup_spe(tsk). After commit 579e633e, the save_all() function calls giveup_spe() first, and only then saves the SPEFSCR register values in tsk->thread. This saves the SPEFSCR register values after SPE has already been disabled for that thread, causing the bug described above. Fixes: 579e633e ("powerpc: create flush_all_to_thread()") Signed-off-by: Felipe Rechia <felipe.rechia@datacom.com.br> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
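The reproducer described above, fleshed out into a standalone sketch (it only exhibits the bug on SPE hardware running an affected kernel):

    #include <math.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        volatile float zero = 0.0f;
        float x = zero / zero;   /* invalid operation: should set SPEFSCR FINV and yield NaN */

        /* On an affected kernel the child reports isnan(x) == 0. */
        printf("%s: isnan(x) = %d\n", pid == 0 ? "child" : "parent", isnan(x));
        if (pid != 0)
            wait(NULL);
        return 0;
    }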
-
Tyrel Datwyler authored
A build error is encountered when including <asm/rtas.h> if no explicit or implicit include of cpumask.h exists in the including file. In file included from arch/powerpc/platforms/pseries/hotplug-pci.c:3:0: ./arch/powerpc/include/asm/rtas.h:360:34: error: unknown type name 'cpumask_var_t' extern int rtas_online_cpus_mask(cpumask_var_t cpus); ^ ./arch/powerpc/include/asm/rtas.h:361:35: error: unknown type name 'cpumask_var_t' extern int rtas_offline_cpus_mask(cpumask_var_t cpus); Fixes: 120496ac ("powerpc: Bring all threads online prior to migration/hibernation") Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
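The log above doesn't show the fix itself; one obvious way to make the header self-contained (an assumption for illustration, not necessarily the committed change) is to pull the cpumask definitions in directly:

    /* arch/powerpc/include/asm/rtas.h -- hypothetical placement */
    #include <linux/cpumask.h>

    extern int rtas_online_cpus_mask(cpumask_var_t cpus);
    extern int rtas_offline_cpus_mask(cpumask_var_t cpus);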
-
Breno Leitao authored
Test ptrace-tm-spd-gpr fails on the current kernel (4.19) due to a segmentation fault that happens in the child process prior to setting cptr[2] = 1. This causes the parent process to wait forever at 'while (!pptr[2])' and the test to be killed by the test harness framework by timeout, thus failing. The segmentation fault happens because of an inline assembly statement being generated as: 0x10000355c <tm_spd_gpr+492> lfs f0, 0(0) This reads memory address 0x0 and causes the segmentation fault. This code is generated by ASM_LOAD_FPR_SINGLE_PRECISION(flt_4), where flt_4 is passed to the inline assembly block as: [flt_4] "r" (&d) Since the inline assembly 'r' constraint means any GPR, gpr0 may be chosen, causing this issue when a Load Floating-Point Single instruction is issued. This patch simply changes the constraint to 'b', which specifies that the register will be used as a base register and that r0 is not allowed, avoiding this issue. Additionally, remove the flt_2 register from the input operands, since it is not used by the inline assembly code at all. Cc: stable@vger.kernel.org Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
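An illustrative, simplified comparison of the two constraints, using an integer load rather than the selftest's lfs so no FPR clobbers are needed:

    float d = 4.0f;
    unsigned int bits;

    /* Buggy pattern: "r" lets GCC pick r0 for the address operand, and in a
     * D-form load a base of r0 means literal 0, so this can read address 0. */
    asm volatile("lwz %0, 0(%1)" : "=r" (bits) : "r" (&d) : "memory");

    /* Fixed pattern: "b" restricts the operand to a base register (r1-r31). */
    asm volatile("lwz %0, 0(%1)" : "=r" (bits) : "b" (&d) : "memory");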
-
Paul Mackerras authored
This changes the KVM code that emulates the decrementer function to do the conversion of decrementer values to time intervals in nanoseconds by calling the tb_to_ns() function exported by the powerpc timer code, in preference to open-coded arithmetic using values from the decrementer_clockevent struct. Similarly, the HV-KVM code that did the same conversion using arithmetic on tb_ticks_per_sec also now uses tb_to_ns(). Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aravinda Prasad authored
This patch exports the maximum possible amount of memory configured on the system via /proc/powerpc/lparcfg. Signed-off-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
The 8xx TLB miss routines are patched when (de)activating perf counters. This patch uses the new patch_site functionality in order to improve code readability and avoid a label mess when dumping the code with 'objdump -d'. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
The 8xx TLB miss routines are patched at startup at several places. This patch uses the new patch_site functionality in order to improve code readability and avoid a label mess when dumping the code with 'objdump -d'. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
This patch adds a helper to get the address of a patch_site. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Call it "patch site" addr] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
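The helper is presumably along these lines (a sketch assuming a patch_site stores a self-relative offset; see asm/code-patching.h for the real definition):

    static inline unsigned long patch_site_addr(s32 *site)
    {
        /* A patch_site records the offset from itself to the patched
         * instruction, so the address is simply site plus *site. */
        return (unsigned long)site + *site;
    }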
-
Christophe Leroy authored
This reverts commit 4f94b2c7. That commit was buggy, as it used rlwinm instead of rlwimi. Instead of fixing that bug, we revert the previous commit in order to reduce the dependency between L1 entries and L2 entries. Fixes: 4f94b2c7 ("powerpc/8xx: Use L1 entry APG to handle _PAGE_ACCESSED for CONFIG_SWAP") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
This reverts commit d8a2fe29. That commit, by me, fixed the out of tree build errors by causing some of the tests not to build at all.
-
- 23 Oct, 2018 2 commits
-
-
Christophe Leroy authored
arch/powerpc/mm/8xx_mmu.c:174:6: error: no previous prototype for ‘set_context’ [-Werror=missing-prototypes] void set_context(unsigned long id, pgd_t *pgd) Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net>
-
Christophe Leroy authored
The MPC885 has SEC engine version 1.2 with the following details: - Number of Crypto channels: 1 - Exec Units: DEU, MDEU and AESU - Available descriptors: 00010, 00100, 00110, 01000, 11000, 11010 It is also supposed to have descriptor 00000, but it doesn't work properly so we keep it out for the moment. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <oss@buserror.net>
-
- 21 Oct, 2018 2 commits
-
-
Christophe Leroy authored
mpic_get_primary_version() is not defined when not using MPIC. The compile error looks like: arch/powerpc/sysdev/built-in.o: In function `fsl_of_msi_probe': fsl_msi.c:(.text+0x150c): undefined reference to `fsl_mpic_primary_get_version' Signed-off-by: Jia Hongtao <hongtao.jia@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com> Reported-by: Radu Rendec <radu.rendec@gmail.com> Fixes: 807d38b7 ("powerpc/mpic: Add get_version API both for internal and external use") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
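One common fix for this class of link error, sketched under the assumption that a stub is acceptable when MPIC is not configured (the actual patch may differ in detail):

    #ifdef CONFIG_MPIC
    extern u32 fsl_mpic_primary_get_version(void);
    #else
    static inline u32 fsl_mpic_primary_get_version(void)
    {
        return 0;   /* no MPIC: report "no version" so callers can bail out */
    }
    #endif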
-
Michael Ellerman authored
Recently in commit 7241d26e ("powerpc/64: properly initialise the stackprotector canary on SMP.") we fixed a crash with stack protector on SMP by initialising the stack canary in cpu_idle_thread_init(). But this can also cause crashes when a CPU comes back online after being offline: Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: pnv_smp_cpu_kill_self+0x2a0/0x2b0 CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.19.0-rc3-gcc-7.3.1-00168-g4ffe713b #94 Call Trace: dump_stack+0xb0/0xf4 (unreliable) panic+0x144/0x328 __stack_chk_fail+0x2c/0x30 pnv_smp_cpu_kill_self+0x2a0/0x2b0 cpu_die+0x48/0x70 arch_cpu_idle_dead+0x20/0x40 do_idle+0x274/0x390 cpu_startup_entry+0x38/0x50 start_secondary+0x5e4/0x600 start_secondary_prolog+0x10/0x14 Looking at the stack we see that the canary value in the stack frame doesn't match the canary in the task/paca. That is because we have reinitialised the task/paca value, but then the CPU coming online has returned into a function using the old canary value. That causes the comparison to fail. Instead we can call boot_init_stack_canary() from start_secondary() which never returns. This is essentially what the generic code does in cpu_startup_entry() under #ifdef X86; we should make that non-x86-specific in a future patch. Fixes: 7241d26e ("powerpc/64: properly initialise the stackprotector canary on SMP.") Reported-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
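A sketch of the change described above (heavily simplified; the real start_secondary() does much more):

    void start_secondary(void *unused)
    {
    #ifdef CONFIG_STACKPROTECTOR
        /* Safe here because start_secondary() never returns, so no caller
         * is left running with a frame built against the old canary. */
        boot_init_stack_canary();
    #endif

        /* ... per-CPU setup elided ... */

        cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
    }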
-
- 20 Oct, 2018 16 commits
-
-
Camelia Groza authored
According to the T2080RDB schematics, for the CS4315 PHY, the XFI 1 lane is connected to SFP 2 and the XFI 2 lane is connected to SFP 1. Change the device tree to reflect the correct PHY order and port association. Signed-off-by: Camelia Groza <camelia.groza@nxp.com> Signed-off-by: Scott Wood <oss@buserror.net>
-
Christophe Leroy authored
commit b96672dd ("powerpc: Machine check interrupt is a non-maskable interrupt") added a call to nmi_enter() at the beginning of the machine check restart exception handler. Due to that, in_interrupt() always returns true regardless of the state before entering the exception, and die() panics even when the system was not already in interrupt. This patch calls nmi_exit() before calling die() in order to restore the interrupt state we had before calling nmi_enter(). Fixes: b96672dd ("powerpc: Machine check interrupt is a non-maskable interrupt") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
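A sketch of the resulting control flow, with the actual machine check handling elided (names and structure simplified relative to the real handler):

    void machine_check_exception(struct pt_regs *regs)
    {
        int recovered = 0;

        nmi_enter();

        /* ... platform-specific machine check handling sets 'recovered' ... */

        if (recovered) {
            nmi_exit();
            return;
        }

        /* Restore the pre-exception context state before die(), so that
         * in_interrupt() no longer reports true just because of nmi_enter(). */
        nmi_exit();
        die("Machine check", regs, SIGBUS);
    }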
-
Nicholas Piggin authored
The recent module relocation overflow crash demonstrated that we have no range checking on REL32 relative relocations. This patch implements a basic check, the same kernel that previously oopsed and rebooted now continues with some of these errors when loading the module: module_64: x_tables: REL32 527703503449812 out of range! Possibly other relocations (ADDR32, REL16, TOC16, etc.) should also have overflow checks. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
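The check presumably looks something like this: a sketch of the R_PPC64_REL32 case in the module relocation loop, where 'me' and 'location' follow apply_relocate_add()'s usual naming; the exact message and bounds idiom may differ:

    case R_PPC64_REL32:
        /* 32-bit PC-relative relocation. */
        value -= (unsigned long)location;
        if (value + 0x80000000 > 0xffffffff) {
            pr_err("%s: REL32 %li out of range!\n",
                   me->name, (long)value);
            return -ENOEXEC;
        }
        *(u32 *)location = value;
        break;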
-
Nicholas Piggin authored
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
This tests that a bctr (Branch to counter and link), ie. a function call, to a wildly out-of-bounds address is handled correctly. Some old kernel versions didn't handle it correctly, see eg: "powerpc/slb: Force a full SLB flush when we insert for a bad EA" https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-April/157397.html Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
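The shape of such a test, sketched in plain userspace terms (the real selftest uses the powerpc test harness; the bogus address is only an illustration):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf env;

    static void segv_handler(int sig)
    {
        siglongjmp(env, 1);
    }

    int main(void)
    {
        /* A wildly out-of-bounds effective address. */
        void (*fp)(void) = (void (*)(void))0x1fffffffffffff00ULL;

        signal(SIGSEGV, segv_handler);

        if (sigsetjmp(env, 1) == 0)
            fp();   /* indirect call: mtctr; bctrl on ppc64 */
        else
            printf("got SIGSEGV as expected\n");

        return 0;
    }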
-
Michael Ellerman authored
When we're running on Book3S with the Radix MMU enabled the page table dump currently prints the wrong addresses because it uses the wrong start address. Fix it to use PAGE_OFFSET rather than KERN_VIRT_START. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
At boot we print the ranges we've mapped for the linear mapping and what page size we've used. Also track whether the range is mapped executable or not and display that as well. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
If we look closely at the logic in create_physical_mapping(), when we're doing STRICT_KERNEL_RWX, we do the following steps: - determine the gap from where we are to the end of the range - choose an appropriate mapping_size based on the gap - check if that mapping_size would overlap the __init_begin boundary, and if not choose an appropriate mapping_size We can simplify the logic by taking the __init_begin boundary into account when we calculate the initial gap. So add a next_boundary() function which tells us what the next boundary is, either the __init_begin boundary or end. In future we can add more boundaries. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
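A sketch of what next_boundary() boils down to with a single internal boundary at __init_begin (simplified relative to the real radix code):

    static unsigned long next_boundary(unsigned long addr, unsigned long end)
    {
        unsigned long stop = __pa_symbol(__init_begin);

        /* The only internal boundary today is the end of kernel text. */
        if (addr < stop)
            return stop;
        return end;
    }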
-
Michael Ellerman authored
When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the linear mapping at the text/data boundary so we can map the kernel text read only. The current logic uses a goto inside the for loop, which works, but is hard to reason about. When we hit the goto retry case we set max_mapping_size to PMD_SIZE and go back to the start. Setting max_mapping_size means we skip the PUD case and go to the PMD case. We know we will pass the alignment and gap checks because the only reason we are there is we hit the goto retry, and that is guarded by mapping_size == PUD_SIZE, which means addr is PUD aligned and gap is greater or equal to PUD_SIZE. So the only part of the check that can fail is the mmu_psize_defs check for the 2M page size. If we just duplicate that check we can avoid the goto, and we get the same result. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the linear mapping at the text/data boundary so we can map the kernel text read only. Currently we always use a small page at the text/data boundary, even when that's not necessary: Mapped 0x0000000000000000-0x0000000000e00000 with 2.00 MiB pages Mapped 0x0000000000e00000-0x0000000001000000 with 64.0 KiB pages Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages This is because the check that the mapping crosses the __init_begin boundary is too strict, it also returns true when we map exactly up to the boundary. So fix it to check that the mapping would actually map past __init_begin, and with that we see: Mapped 0x0000000000000000-0x0000000040000000 with 2.00 MiB pages Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
When we have CONFIG_STRICT_KERNEL_RWX enabled, we want to split the linear mapping at the text/data boundary so we can map the kernel text read only. But the current logic uses small pages for the entire text section, regardless of whether a larger page size would fit. eg. with the boundary at 16M we could use 2M pages, but instead we use 64K pages up to the 16M boundary: Mapped 0x0000000000000000-0x0000000001000000 with 64.0 KiB pages Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages This is because the test is checking if addr is < __init_begin and addr + mapping_size is >= _stext. But that is true for all pages between _stext and __init_begin. Instead what we want to check is if we are crossing the text/data boundary, which is at __init_begin. With that fixed we see: Mapped 0x0000000000000000-0x0000000000e00000 with 2.00 MiB pages Mapped 0x0000000000e00000-0x0000000001000000 with 64.0 KiB pages Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages ie. we're correctly using 2MB pages below __init_begin, but we still drop down to 64K pages unnecessarily at the boundary. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
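Concretely, the corrected condition (a sketch; use_smaller_page() is a hypothetical helper standing in for the real fallback logic) only forces a smaller page when the candidate mapping would actually straddle __init_begin:

    /* Old check: true for every page between _stext and __init_begin. */
    if (addr < __pa_symbol(__init_begin) &&
        addr + mapping_size >= __pa_symbol(_stext))
        use_smaller_page();

    /* New check: true only when the mapping crosses the text/data boundary. */
    if (addr < __pa_symbol(__init_begin) &&
        addr + mapping_size > __pa_symbol(__init_begin))
        use_smaller_page();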
-
Michael Ellerman authored
When we have CONFIG_STRICT_KERNEL_RWX enabled, we try to split the kernel linear (1:1) mapping so that the kernel text is in a separate page to kernel data, so we can mark the former read-only. We could achieve that just by always using 64K pages for the linear mapping, but we try to be smarter. Instead we use huge pages when possible, and only switch to smaller pages when necessary. However we have an off-by-one bug in that logic, which causes us to calculate the wrong boundary between text and data. For example with the end of the kernel text at 16M we see: radix-mmu: Mapped 0x0000000000000000-0x0000000001200000 with 64.0 KiB pages radix-mmu: Mapped 0x0000000001200000-0x0000000040000000 with 2.00 MiB pages radix-mmu: Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages ie. we mapped from 0 to 18M with 64K pages, even though the boundary between text and data is at 16M. With the fix we see we're correctly hitting the 16M boundary: radix-mmu: Mapped 0x0000000000000000-0x0000000001000000 with 64.0 KiB pages radix-mmu: Mapped 0x0000000001000000-0x0000000040000000 with 2.00 MiB pages radix-mmu: Mapped 0x0000000040000000-0x0000000100000000 with 1.00 GiB pages Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
Currently, we expect to be able to reach ftrace_caller() from all ftrace-enabled functions through a single relative branch. With large kernel configs, we see functions outside of 32MB of ftrace_caller() causing ftrace_init() to bail. In such configurations, gcc/ld emits two types of trampolines for mcount(): 1. A long_branch, which has a single branch to mcount() for functions that are one hop away from mcount(): c0000000019e8544 <00031b56.long_branch._mcount>: c0000000019e8544: 4a 69 3f ac b c00000000007c4f0 <._mcount> 2. A plt_branch, for functions that are farther away from mcount(): c0000000051f33f8 <0008ba04.plt_branch._mcount>: c0000000051f33f8: 3d 82 ff a4 addis r12,r2,-92 c0000000051f33fc: e9 8c 04 20 ld r12,1056(r12) c0000000051f3400: 7d 89 03 a6 mtctr r12 c0000000051f3404: 4e 80 04 20 bctr We can reuse those trampolines for ftrace if we can have those trampolines go to ftrace_caller() instead. However, with ABIv2, we cannot depend on r2 being valid. As such, we use only the long_branch trampolines by patching those to instead branch to ftrace_caller or ftrace_regs_caller. In addition, we add additional trampolines around .text and .init.text to catch locations that are covered by the plt branches. This allows ftrace to work with most large kernel configurations. For now, we always patch the trampolines to go to ftrace_regs_caller, which is slightly inefficient. This can be optimized further at a later point. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
WARNING: CPU: 12 PID: 4322 at /arch/powerpc/mm/pgtable-book3s64.c:76 set_pmd_at+0x4c/0x2b0 Modules linked in: CPU: 12 PID: 4322 Comm: qemu-system-ppc Tainted: G W 4.19.0-rc3-00758-g8f0c636b0542 #36 NIP: c0000000000872fc LR: c000000000484eec CTR: 0000000000000000 REGS: c000003fba876fe0 TRAP: 0700 Tainted: G W (4.19.0-rc3-00758-g8f0c636b0542) MSR: 900000010282b033 <SF,HV,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 24282884 XER: 00000000 CFAR: c000000000484ee8 IRQMASK: 0 GPR00: c000000000484eec c000003fba877268 c000000001f0ec00 c000003fbd229f80 GPR04: 00007c8fe8e00000 c000003f864c5a38 860300853e0000c0 0000000000000080 GPR08: 0000000080000000 0000000000000001 0401000000000080 0000000000000001 GPR12: 0000000000002000 c000003fffff5400 c000003fce292000 00007c9024570000 GPR16: 0000000000000000 0000000000ffffff 0000000000000001 c000000001885950 GPR20: 0000000000000000 001ffffc0004807c 0000000000000008 c000000001f49d05 GPR24: 00007c8fe8e00000 c0000000020f2468 ffffffffffffffff c000003fcd33b090 GPR28: 00007c8fe8e00000 c000003fbd229f80 c000003f864c5a38 860300853e0000c0 NIP [c0000000000872fc] set_pmd_at+0x4c/0x2b0 LR [c000000000484eec] do_huge_pmd_numa_page+0xb1c/0xc20 Call Trace: [c000003fba877268] [c00000000045931c] mpol_misplaced+0x1bc/0x230 (unreliable) [c000003fba8772c8] [c000000000484eec] do_huge_pmd_numa_page+0xb1c/0xc20 [c000003fba877398] [c00000000040d344] __handle_mm_fault+0x5e4/0x2300 [c000003fba8774d8] [c00000000040f400] handle_mm_fault+0x3a0/0x420 [c000003fba877528] [c0000000003ff6f4] __get_user_pages+0x2e4/0x560 [c000003fba877628] [c000000000400314] get_user_pages_unlocked+0x104/0x2a0 [c000003fba8776c8] [c000000000118f44] __gfn_to_pfn_memslot+0x284/0x6a0 [c000003fba877748] [c0000000001463a0] kvmppc_book3s_radix_page_fault+0x360/0x12d0 [c000003fba877838] [c000000000142228] kvmppc_book3s_hv_page_fault+0x48/0x1300 [c000003fba877988] [c00000000013dc08] kvmppc_vcpu_run_hv+0x1808/0x1b50 [c000003fba877af8] [c000000000126b44] kvmppc_vcpu_run+0x34/0x50 [c000003fba877b18] [c000000000123268] kvm_arch_vcpu_ioctl_run+0x288/0x2d0 [c000003fba877b98] [c00000000011253c] kvm_vcpu_ioctl+0x1fc/0x8c0 [c000003fba877d08] [c0000000004e9b24] do_vfs_ioctl+0xa44/0xae0 [c000003fba877db8] [c0000000004e9c44] ksys_ioctl+0x84/0xf0 [c000003fba877e08] [c0000000004e9cd8] sys_ioctl+0x28/0x80 We removed the pte_protnone check earlier with the understanding that we mark the pte invalid before the set_pte/set_pmd usage. But the huge pmd autonuma still use the set_pmd_at directly. This is ok because a protnone pte won't have translation cache in TLB. Fixes: da7ad366 ("powerpc/mm/book3s: Update pmd_present to look at _PAGE_PRESENT bit") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Some of our Makefiles don't do the right thing when building the selftests with O=, fix them up. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
If CONFIG_PPC_SPLPAR is not selected, steal_time will always be zero, so accounting for it is pointless. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-