- 29 Mar, 2021 20 commits
-
-
Christophe Leroy authored
Ensure normal exception handlers are able to manage stuff with the MMU enabled. For that we use CONFIG_VMAP_STACK related code, although there is no intention to really activate CONFIG_VMAP_STACK on powerpc 40x for the moment. 40x uses SPRN_DEAR instead of SPRN_DAR and SPRN_ESR instead of SPRN_DSISR; take that into account in the common macros. The 40x MSR value doesn't fit in 15 bits, so use LOAD_REG_IMMEDIATE() in the common macros that will also be used with 40x. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/01963af2b83037bca270d7bf1336ffcf35da8282.1615552866.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In order to enable the MMU early in the exception prolog, implement CONFIG_VMAP_STACK principles in the critical exception prolog. There is no intention to use CONFIG_VMAP_STACK on 40x, but the related code will be used to enable the MMU early in exceptions in a later patch. Also address (critirq_ctx - PAGE_OFFSET) directly instead of using tophys() in order to save one instruction. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3fd75ee54c48307119acdbf66cfea966c1463bbd.1615552866.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In order to ease preparation for CONFIG_VMAP_STACK, reorder a few instructions; in particular, save r1 into the stack frame earlier. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c895ecf958c86d1736bdd2ff6f36626b55f35fd2.1615552866.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In order to be able to switch the MMU on in the exception prolog, save SRR0 and SRR1 earlier. Also save r10 and r11 onto the stack earlier to better match the normal exception prolog. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/79a93f253d72dc97ac968c9c62b5066960b688ed.1615552866.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Change CRITICAL_EXCEPTION_PROLOG macro to a gas macro to remove the ugly ; and \ on each line. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/73291fb9dc9ec58182c27a40dfc3db204e3f4024.1615552866.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
SPRN_SPRG_SCRATCH5 is used to save SPRN_PID, and SPRN_SPRG_SCRATCH6 is already available. SPRN_PID is only 8 bits, and r12 holds CR, of which only CR0 needs to be preserved, so there is space available in r12 to save PID. Keep PID in r12 and free up SPRN_SPRG_SCRATCH5. Then, in the TLB miss handlers, use SPRN_SPRG_SCRATCH5 and SPRN_SPRG_SCRATCH6 instead of SPRN_SPRG_SCRATCH0 and SPRN_SPRG_SCRATCH1 to avoid future conflicts with the normal exception prologs. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4cdaa85d38e14d594ba902424060ec55babf2c42.1615552866.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
unrecoverable_exception() is never expected to return; most callers have an infinite loop in case it returns. Ensure it really never returns by terminating it with a BUG(), and declare it __noreturn. This allows GCC to really simplify the functions calling it. In the example below, it avoids the stack frame in the likely fast path and avoids code duplication for the exit.

With this patch:

00000348 <interrupt_exit_kernel_prepare>:
 348:	81 43 00 84 	lwz     r10,132(r3)
 34c:	71 48 00 02 	andi.   r8,r10,2
 350:	41 82 00 2c 	beq     37c <interrupt_exit_kernel_prepare+0x34>
 354:	71 4a 40 00 	andi.   r10,r10,16384
 358:	40 82 00 20 	bne     378 <interrupt_exit_kernel_prepare+0x30>
 35c:	80 62 00 70 	lwz     r3,112(r2)
 360:	74 63 00 01 	andis.  r3,r3,1
 364:	40 82 00 28 	bne     38c <interrupt_exit_kernel_prepare+0x44>
 368:	7d 40 00 a6 	mfmsr   r10
 36c:	7c 11 13 a6 	mtspr   81,r0
 370:	7c 12 13 a6 	mtspr   82,r0
 374:	4e 80 00 20 	blr
 378:	48 00 00 00 	b       378 <interrupt_exit_kernel_prepare+0x30>
 37c:	94 21 ff f0 	stwu    r1,-16(r1)
 380:	7c 08 02 a6 	mflr    r0
 384:	90 01 00 14 	stw     r0,20(r1)
 388:	48 00 00 01 	bl      388 <interrupt_exit_kernel_prepare+0x40>
 			388: R_PPC_REL24	unrecoverable_exception
 38c:	38 e2 00 70 	addi    r7,r2,112
 390:	3d 00 00 01 	lis     r8,1
 394:	7c c0 38 28 	lwarx   r6,0,r7
 398:	7c c6 40 78 	andc    r6,r6,r8
 39c:	7c c0 39 2d 	stwcx.  r6,0,r7
 3a0:	40 a2 ff f4 	bne     394 <interrupt_exit_kernel_prepare+0x4c>
 3a4:	38 60 00 01 	li      r3,1
 3a8:	4b ff ff c0 	b       368 <interrupt_exit_kernel_prepare+0x20>

Without this patch:

00000348 <interrupt_exit_kernel_prepare>:
 348:	94 21 ff f0 	stwu    r1,-16(r1)
 34c:	93 e1 00 0c 	stw     r31,12(r1)
 350:	7c 7f 1b 78 	mr      r31,r3
 354:	81 23 00 84 	lwz     r9,132(r3)
 358:	71 2a 00 02 	andi.   r10,r9,2
 35c:	41 82 00 34 	beq     390 <interrupt_exit_kernel_prepare+0x48>
 360:	71 29 40 00 	andi.   r9,r9,16384
 364:	40 82 00 28 	bne     38c <interrupt_exit_kernel_prepare+0x44>
 368:	80 62 00 70 	lwz     r3,112(r2)
 36c:	74 63 00 01 	andis.  r3,r3,1
 370:	40 82 00 3c 	bne     3ac <interrupt_exit_kernel_prepare+0x64>
 374:	7d 20 00 a6 	mfmsr   r9
 378:	7c 11 13 a6 	mtspr   81,r0
 37c:	7c 12 13 a6 	mtspr   82,r0
 380:	83 e1 00 0c 	lwz     r31,12(r1)
 384:	38 21 00 10 	addi    r1,r1,16
 388:	4e 80 00 20 	blr
 38c:	48 00 00 00 	b       38c <interrupt_exit_kernel_prepare+0x44>
 390:	7c 08 02 a6 	mflr    r0
 394:	90 01 00 14 	stw     r0,20(r1)
 398:	48 00 00 01 	bl      398 <interrupt_exit_kernel_prepare+0x50>
 			398: R_PPC_REL24	unrecoverable_exception
 39c:	80 01 00 14 	lwz     r0,20(r1)
 3a0:	81 3f 00 84 	lwz     r9,132(r31)
 3a4:	7c 08 03 a6 	mtlr    r0
 3a8:	4b ff ff b8 	b       360 <interrupt_exit_kernel_prepare+0x18>
 3ac:	39 02 00 70 	addi    r8,r2,112
 3b0:	3d 40 00 01 	lis     r10,1
 3b4:	7c e0 40 28 	lwarx   r7,0,r8
 3b8:	7c e7 50 78 	andc    r7,r7,r10
 3bc:	7c e0 41 2d 	stwcx.  r7,0,r8
 3c0:	40 a2 ff f4 	bne     3b4 <interrupt_exit_kernel_prepare+0x6c>
 3c4:	38 60 00 01 	li      r3,1
 3c8:	4b ff ff ac 	b       374 <interrupt_exit_kernel_prepare+0x2c>

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1e883e9d93fdb256853d1434c8ad77c257349b2d.1615552866.git.christophe.leroy@csgroup.eu
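A minimal C sketch of the idea (the exact function body differs; the commit only describes the __noreturn annotation and the trailing BUG()):

	/* Declared __noreturn and terminated with BUG(), so GCC may drop
	 * the "in case it returns" handling at every call site. */
	void __noreturn unrecoverable_exception(struct pt_regs *regs)
	{
		pr_err("Unrecoverable exception %lx at %lx (msr=%lx)\n",
		       regs->trap, regs->nip, regs->msr);
		die("Unrecoverable exception", regs, SIGABRT);
		BUG();	/* tells the compiler this point is unreachable */
	}

	/* A caller no longer needs a trailing infinite loop: */
	if (unlikely(!(regs->msr & MSR_RI)))
		unrecoverable_exception(regs);	/* for (;;) is now provably dead */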
-
Laurent Dufour authored
It is better to rely on the API provided by the MM layer instead of directly manipulating the mm_users field. Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Acked-by: Frederic Barrat <fbarrat@linux.ibm.com> Acked-by: Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210310174405.51044-1-ldufour@linux.ibm.com
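A hedged sketch of the kind of substitution involved (the context structure and call sites are illustrative, not the driver's actual code):

	/* Open-coded refcount manipulation on mm_users: */
	atomic_inc(&ctx->mm->mm_users);
	/* ... */
	atomic_dec(&ctx->mm->mm_users);

	/* MM-layer API from <linux/sched/mm.h>: */
	mmget(ctx->mm);		/* take a reference on the address space */
	/* ... */
	mmput(ctx->mm);		/* drop it; may free the mm */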
-
Ravi Bangoria authored
As per ISA 3.1, a prefixed instruction must not cross a 64-byte boundary, so don't allow a Uprobe on such a prefixed instruction. There are two ways the probed instruction gets replaced in mapped pages. First, when the Uprobe is activated, it searches for all the relevant pages and replaces the instruction in them. In this case, if the probe is on a 64-byte-unaligned prefixed instruction, error out directly. Second, when the Uprobe is already active and the user maps a relevant page via mmap(), the instruction is replaced via the mmap() code path. But because the Uprobe is invalid, the entire mmap() operation cannot be stopped; in this case just print an error and continue. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210311091538.368590-1-ravi.bangoria@linux.ibm.com
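A hedged sketch of the boundary test (variable names are illustrative; a prefixed instruction is 8 bytes and its prefix word has primary opcode 1, so it crosses a 64-byte block only when it starts at offset 60):

	/* 'vaddr' is the probe address, 'word' the first 32-bit word of
	 * the probed instruction. */
	if ((word >> 26) == 1 && (vaddr & 0x3f) == 60) {
		pr_info_ratelimited("Cannot probe prefixed instruction crossing 64-byte boundary\n");
		return -EINVAL;
	}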
-
Christopher M. Riedl authored
Usually sigset_t is exactly 8B which is a "trivial" size and does not warrant using __copy_from_user(). Use __get_user() directly in anticipation of future work to remove the trivial size optimizations from __copy_from_user(). The ppc32 implementation of get_sigset_t() previously called copy_from_user() which, unlike __copy_from_user(), calls access_ok(). Replacing this w/ __get_user() (no access_ok()) is fine here since both callsites in signal_32.c are preceded by an earlier access_ok(). Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-11-cmr@codefail.de
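A hedged sketch of the resulting helper, assuming (as the message says) that sigset_t is a single 64-bit word on this ABI:

	static inline int get_sigset_t(sigset_t *set, const sigset_t __user *uset)
	{
		/* One 8-byte load; both callers already did access_ok(). */
		return __get_user(set->sig[0], (u64 __user *)&uset->sig[0]);
	}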
-
Daniel Axtens authored
Add uaccess blocks and use the 'unsafe' versions of functions doing user access where possible to reduce the number of times uaccess has to be opened/closed. Co-developed-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-10-cmr@codefail.de
-
Daniel Axtens authored
Add uaccess blocks and use the 'unsafe' versions of functions doing user access where possible to reduce the number of times uaccess has to be opened/closed. There is no 'unsafe' version of copy_siginfo_to_user, so move it slightly to allow for a "longer" uaccess block. Co-developed-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-9-cmr@codefail.de
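A hedged sketch of the pattern used in these two patches (the frame field names are illustrative, not the real sigframe layout):

	if (!user_write_access_begin(frame, sizeof(*frame)))
		goto badframe;
	/* The 'unsafe' accessors assume the window is already open and
	 * jump to the label on fault, so KUAP is toggled once per frame
	 * rather than once per field. */
	unsafe_put_user(regs->nip, &frame->mc_gregs[PT_NIP], failed);
	unsafe_put_user(regs->link, &frame->mc_gregs[PT_LNK], failed);
	user_write_access_end();

	/* copy_siginfo_to_user() has no 'unsafe' variant, so it runs
	 * outside the block: */
	if (copy_siginfo_to_user(&frame->info, &ksig->info))
		goto badframe;
	return 0;

failed:
	user_write_access_end();
badframe:
	return 1;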
-
Christopher M. Riedl authored
Previously restore_sigcontext() performed a costly KUAP switch on every uaccess operation. These repeated uaccess switches cause a significant drop in signal handling performance. Rewrite restore_sigcontext() to assume that a userspace read access window is open by replacing all uaccess functions with their 'unsafe' versions. Modify the callers to first open, call unsafe_restore_sigcontext(), and then close the uaccess window. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-8-cmr@codefail.de
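A hedged sketch of the caller side (the exact signature of unsafe_restore_sigcontext() and the error paths are condensed here):

	if (!user_read_access_begin(&uc->uc_mcontext, sizeof(uc->uc_mcontext)))
		goto badframe;
	unsafe_restore_sigcontext(current, NULL, 1, &uc->uc_mcontext, failed);
	user_read_access_end();
	return 0;

failed:
	user_read_access_end();
badframe:
	return -EFAULT;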
-
Christopher M. Riedl authored
Previously setup_sigcontext() performed a costly KUAP switch on every uaccess operation. These repeated uaccess switches cause a significant drop in signal handling performance. Rewrite setup_sigcontext() to assume that a userspace write access window is open by replacing all uaccess functions with their 'unsafe' versions. Modify the callers to first open, call unsafe_setup_sigcontext() and then close the uaccess window. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-7-cmr@codefail.de
-
Christopher M. Riedl authored
Both rt_sigreturn() and handle_rt_signal64() contain TM-related ifdefs which break up an if/else block. Provide stubs for the ifdef-guarded TM functions and remove the need for an ifdef in rt_sigreturn(). Rework the remaining TM ifdef in handle_rt_signal64() similar to commit f1cf4f93 ("powerpc/signal32: Remove ifdefery in middle of if/else"). Unlike in the commit for ppc32, the ifdef can't be removed entirely since uc_transact in the sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-6-cmr@codefail.de
-
Christopher M. Riedl authored
Unlike the other MSR_TM_* macros, MSR_TM_ACTIVE does not reference or use its parameter unless CONFIG_PPC_TRANSACTIONAL_MEM is defined. This causes an 'unused variable' compile warning unless the variable is also guarded with CONFIG_PPC_TRANSACTIONAL_MEM. Reference but do nothing with the argument in the macro to avoid a potential compile warning. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-5-cmr@codefail.de
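The shape of the fix, as a sketch (the TM-enabled definition shown is the pre-existing one):

	#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
	#define MSR_TM_ACTIVE(x)	(((x) & MSR_TS_MASK) != 0)
	#else
	/* Evaluate and discard the argument so a variable used only here
	 * doesn't trigger an 'unused variable' warning when TM is
	 * compiled out. */
	#define MSR_TM_ACTIVE(x)	((void)(x), 0)
	#endif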
-
Christopher M. Riedl authored
The majority of setup_sigcontext() can be refactored to execute in an "unsafe" context assuming an open uaccess window except for some non-inline function calls. Move these out into a separate prepare_setup_sigcontext() function which must be called first and before opening up a uaccess window. Non-inline function calls should be avoided during a uaccess window for a few reasons: - KUAP should be enabled for as much kernel code as possible. Opening a uaccess window disables KUAP which means any code executed during this time contributes to a potential attack surface. - Non-inline functions default to traceable which means they are instrumented for ftrace. This adds more code which could run with KUAP disabled. - Powerpc does not currently support the objtool UACCESS checks. All code running with uaccess must be audited manually which means: less code -> less work -> fewer problems (in theory). A follow-up commit converts setup_sigcontext() to be "unsafe". Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-4-cmr@codefail.de
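A hedged sketch of the resulting ordering (argument lists are illustrative):

	/* 1. Non-inline, traceable work runs first, with KUAP protection
	 *    still in force. */
	prepare_setup_sigcontext(tsk);

	/* 2. Only then open the user write window... */
	if (!user_write_access_begin(sc, sizeof(*sc)))
		return -EFAULT;

	/* 3. ...and run only inline 'unsafe' accessors inside it. */
	unsafe_setup_sigcontext(sc, tsk, signr, set, handler, failed);
	user_write_access_end();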
-
Christopher M. Riedl authored
Reuse the "safe" implementation from signal.c but call unsafe_get_user() directly in a loop to avoid the intermediate copy into a local buffer. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-3-cmr@codefail.de
-
Christopher M. Riedl authored
Use the same approach as unsafe_copy_to_user() but instead call unsafe_get_user() in a loop. Signed-off-by: Christopher M. Riedl <cmr@codefail.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210227011259.11992-2-cmr@codefail.de
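A simplified sketch of the approach, assuming for brevity that the length is a multiple of 8 bytes (the real macro also has to deal with the tail):

	#define unsafe_copy_from_user(d, s, l, label)	do {			\
		u8 *_dst = (u8 *)(d);						\
		const u8 __user *_src = (const u8 __user *)(s);			\
		size_t _len = (l), _i;						\
										\
		for (_i = 0; _i < _len; _i += 8)				\
			unsafe_get_user(*(u64 *)(_dst + _i),			\
					(u64 __user *)(_src + _i), label);	\
	} while (0)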
-
Davidlohr Bueso authored
49a7d46a ("powerpc: Implement smp_cond_load_relaxed()") added busy-wait pausing with a preferred SMT priority pattern, lowering the priority (reducing decode cycles) during the whole loop slowpath. However, data shows that while this pattern works well with simple spinlocks, queued spinlocks benefit more from being kept at medium priority, using a cpu_relax() instead, which is a low+medium combination on powerpc. A sketch of the two patterns follows the results. Data is from three benchmarks on a Power9: 9008-22L, 64 CPUs with 2 sockets and 8 threads per core.

1. locktorture. This is data for the lowest and most artificial/pathological level, with increasing thread counts pounding on the lock. Metrics are total ops/minute. Despite some small hits in the 4-8 range, scenarios are either neutral or favorable to this patch.

+=========+==========+==========+=======+
| # tasks | vanilla  | dirty    | %diff |
+=========+==========+==========+=======+
| 2       | 46718565 | 48751350 |  4.35 |
+---------+----------+----------+-------+
| 4       | 51740198 | 50369082 | -2.65 |
+---------+----------+----------+-------+
| 8       | 63756510 | 62568821 | -1.86 |
+---------+----------+----------+-------+
| 16      | 67824531 | 70966546 |  4.63 |
+---------+----------+----------+-------+
| 32      | 53843519 | 61155508 | 13.58 |
+---------+----------+----------+-------+
| 64      | 53005778 | 53104412 |  0.18 |
+---------+----------+----------+-------+
| 128     | 53331980 | 54606910 |  2.39 |
+=========+==========+==========+=======+

2. sockperf (tcp throughput). Here a client will do one-way throughput tests to a localhost server, with increasing message sizes, dealing with the sk_lock. This patch brings the performance of the qspinlock back to par with that of the simple lock:

                simple-spinlock         vanilla                 dirty
Hmean    14       73.50 (  0.00%)      54.44 * -25.93%*       73.45 *  -0.07%*
Hmean   100      654.47 (  0.00%)     385.61 * -41.08%*      771.43 *  17.87%*
Hmean   300     2719.39 (  0.00%)    2181.67 * -19.77%*     2666.50 *  -1.94%*
Hmean   500     4400.59 (  0.00%)    3390.77 * -22.95%*     4322.14 *  -1.78%*
Hmean   850     6726.21 (  0.00%)    5264.03 * -21.74%*     6863.12 *   2.04%*

3. dbench (tmpfs). Configured to run with up to ncpusx8 clients, it shows both latency and throughput metrics. For the latency, with the exception of the 64 case, there is really nothing to go by:

                        vanilla                 dirty
Amean   latency-1         1.67 (  0.00%)          1.67 *   0.09%*
Amean   latency-2         2.15 (  0.00%)          2.08 *   3.36%*
Amean   latency-4         2.50 (  0.00%)          2.56 *  -2.27%*
Amean   latency-8         2.49 (  0.00%)          2.48 *   0.31%*
Amean   latency-16        2.69 (  0.00%)          2.72 *  -1.37%*
Amean   latency-32        2.96 (  0.00%)          3.04 *  -2.60%*
Amean   latency-64        7.78 (  0.00%)          8.17 *  -5.07%*
Amean   latency-512     186.91 (  0.00%)        186.41 *   0.27%*

For the dbench4 throughput (misleading but traditional) there's a small but rather constant improvement:

                        vanilla                 dirty
Hmean   1           849.13 (  0.00%)        851.51 *   0.28%*
Hmean   2          1664.03 (  0.00%)       1663.94 *  -0.01%*
Hmean   4          3073.70 (  0.00%)       3104.29 *   1.00%*
Hmean   8          5624.02 (  0.00%)       5694.16 *   1.25%*
Hmean   16         9169.49 (  0.00%)       9324.43 *   1.69%*
Hmean   32        11969.37 (  0.00%)      12127.09 *   1.32%*
Hmean   64        15021.12 (  0.00%)      15243.14 *   1.48%*
Hmean   512       14891.27 (  0.00%)      15162.11 *   1.82%*

Measuring the dbench4 per-VFS operation latency shows some very minor differences within the noise level, around the 0-1% range.

Fixes: 49a7d46a ("powerpc: Implement smp_cond_load_relaxed()")
Acked-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210318204702.71417-1-dave@stgolabs.net
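For context, a hedged sketch of the two SMT-priority patterns being compared (HMT_low()/HMT_medium() and cpu_relax() are the existing powerpc helpers; the wait loop itself is illustrative):

	/* Previous pattern: drop to low priority for the whole slowpath. */
	HMT_low();			/* or 1,1,1 : low SMT priority    */
	while (!locked_value_seen())
		barrier();
	HMT_medium();			/* or 2,2,2 : medium SMT priority */

	/* Preferred for queued spinlocks: a short low+medium pulse per
	 * iteration, which is what cpu_relax() does on powerpc64. */
	while (!locked_value_seen())
		cpu_relax();		/* HMT_low(); HMT_medium(); barrier(); */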
-
- 26 Mar, 2021 11 commits
-
-
Davidlohr Bueso authored
c6f5d02b (locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()) made it pretty official that the call semantics do not imply any sort of barriers, and any user that gets creative must explicitly do any serialization. This creativity, however, is nowadays pretty limited: 1. spin_unlock_wait() has been removed from the kernel in favor of a lock/unlock combo. Furthermore, queued spinlocks have for a number of years no longer relied on _Q_LOCKED_VAL for the call, but on any non-zero value to indicate a locked state. There were cases where the delayed locked store could lead to breaking mutual exclusion with crossed locking, such as with sysv ipc and netfilter being the most extreme. 2. The auditing Andrea did verified that the remaining spin_is_locked() callers no longer rely on such semantics. Most callers just use it to assert a lock is taken, in a debug nature. The only user that gets cute is the NOLOCK qdisc, as of: 96009c7d (sched: replace __QDISC_STATE_RUNNING bit with a spin lock) ... which ironically went in the day after c6f5d02b. This change replaces test_bit() with spin_is_locked() to know whether to take the busylock heuristic to reduce contention on the main qdisc lock. So any races against spin_is_locked() for archs that use LL/SC for spin_lock() will be benign and not break any mutual exclusion; furthermore, both the seqlock and busylock have the same scope. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210309015950.27688-3-dave@stgolabs.net
-
Davidlohr Bueso authored
Instead of both queued and simple spinlocks doing it, move it into the arch's spinlock.h. Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210309015950.27688-2-dave@stgolabs.net
-
Christophe Leroy authored
Use a user access block in gpr32_set_common() instead of repetitive __get_user() calls, which imply repetitive KUAP open/close. To keep it clean, force inlining of the small set of tiny functions called inside the block. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bdcb8652c3bb4ab5b8b3bfd08147434be8fc04c9.1615398498.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use user_access_begin() instead of the access_ok/allow_access sequence. This brings the missing might_fault() check. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6cd202cdc4f939d47822e4ddd3c0856210431a58.1615398498.git.christophe.leroy@csgroup.eu
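A hedged sketch of the substitution (the KUAP helper names are from the powerpc uaccess/kup code; the surrounding function is condensed):

	/* Before: explicit check plus KUAP open, but no might_fault(): */
	if (!access_ok(uaddr, len))
		return -EFAULT;
	allow_read_write_user(uaddr, uaddr, len);
	/* ... user accesses ... */
	prevent_read_write_user(uaddr, uaddr, len);

	/* After: one call that does access_ok(), might_fault() and the
	 * KUAP open, paired with user_access_end(): */
	if (!user_access_begin(uaddr, len))
		return -EFAULT;
	/* ... unsafe_get_user()/unsafe_put_user() accesses ... */
	user_access_end();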
-
Christophe Leroy authored
Use user_access_begin() instead of the might_sleep/access_ok/allow_access sequence. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2dee286d2d6dc9a27d99e31ac564bad4fae2cb49.1615398498.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
__put_user_asm_goto() is internal to uaccess.h. Use __put_kernel_nofault() instead. The generated code is identical. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3e32c4f0361933909368b68f5ee569e5de661c1b.1615398498.git.christophe.leroy@csgroup.eu
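A hedged illustration of the swap, using a 32-bit store as an example (the destination and value are made up):

	u32 insn = 0x60000000;	/* nop */

	/* Before: uaccess.h-internal asm-goto helper: */
	__put_user_asm_goto(insn, dst, failed, "stw");

	/* After: the public API; the generated code is the same here: */
	__put_kernel_nofault(dst, &insn, u32, failed);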
-
Christophe Leroy authored
Instead of open-coding the copy of parameters, use the generic sys_old_select(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4de983ad254739da1fe6e9f273baf387b7043ae0.1615398498.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The copy_mc_xxx() functions sit in the middle of the raw_copy functions. For clarity, move them out of the raw_copy functions block. They use access_ok, so they need to come after the general functions in order to eventually allow the inclusion of asm-generic/uaccess.h at some point in the future. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2cdecb6e5a2fcee6c158d18dd254b71ec0e0da4d.1615398498.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
It is clear_user() which is expected to call __clear_user(), not the reverse. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d8ec01fb22f33d87321451d5e5f01cb56dacaa39.1615398498.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
ppc32 has an efficient 64-bit __put_user(), so use it as well in order to unroll loops more. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ccc08a16eea682d6fa4acc957ffe34003a8f0844.1615398498.git.christophe.leroy@csgroup.eu
-
Laurent Dufour authored
This is helpful to read the security flavor from inside the LPAR. In /sys/kernel/debug/powerpc/security_features it can be seen if mitigations are on or off but not the level set through the ASMI menu. Furthermore, reporting it through /proc/powerpc/lparcfg allows an easy processing by the lparstat command [1]. Export it like this in /proc/powerpc/lparcfg: $ grep security_flavor /proc/powerpc/lparcfg security_flavor=1 Value follows what is documented on the IBM support page [2]: 0 Speculative execution fully enabled 1 Speculative execution controls to mitigate user-to-kernel attacks 2 Speculative execution controls to mitigate user-to-kernel and user-to-user side-channel attacks [1] https://groups.google.com/g/powerpc-utils-devel/c/NaKXvdyl_UI/m/wa2stpIDAQAJ [2] https://www.ibm.com/support/pages/node/715841 Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210305125554.5165-1-ldufour@linux.ibm.com
-
- 24 Mar, 2021 9 commits
-
-
Christophe Leroy authored
Add architecture specific implementation details for KFENCE and enable KFENCE for the ppc32 architecture. In particular, this implements the required interface in <asm/kfence.h>. KFENCE requires that attributes for pages from its memory pool can individually be set. Therefore, force the Read/Write linear map to be mapped at page granularity. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Acked-by: Marco Elver <elver@google.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8dfe1bd2abde26337c1d8c1ad0acfcc82185e0d5.1614868445.git.christophe.leroy@csgroup.eu
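A hedged sketch of the ppc32 side of the <asm/kfence.h> interface this adds (pte_update() and virt_to_kpte() are the existing 32-bit page-table helpers):

	static inline bool arch_kfence_init_pool(void)
	{
		return true;
	}

	static inline bool kfence_protect_page(unsigned long addr, bool protect)
	{
		pte_t *kpte = virt_to_kpte(addr);

		if (protect) {
			/* Clear _PAGE_PRESENT so any access faults. */
			pte_update(&init_mm, addr, kpte, _PAGE_PRESENT, 0, 0);
			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		} else {
			pte_update(&init_mm, addr, kpte, 0, _PAGE_PRESENT, 0);
		}

		return true;
	}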
-
Denis Efremov authored
"offsetof(struct pt_regs, msr) == offsetof(struct user_pt_regs, msr)" checked in pt_regs_check() twice in a row. Remove the second check. Signed-off-by: Denis Efremov <efremov@linux.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210305112807.26299-1-efremov@linux.com
-
Lee Jones authored
Fixes the following W=1 kernel build warning(s): drivers/tty/hvc/hvc_vio.c:385:13: warning: no previous prototype for ‘hvc_vio_init_early’ 385 | void __init hvc_vio_init_early(void) | ^~~~~~~~~~~~~~~~~~ Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210303124603.3150175-1-lee.jones@linaro.org
-
Zhang Yunkai authored
The comment marking the end of the include guard is wrong, fix it up. Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn> [mpe: Rewrite commit message] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210304031318.188447-1-zhang.yunkai@zte.com.cn
-
Zhang Yunkai authored
asm/tm.h included in traps.c is duplicated. It is also included on the 62nd line. asm/udbg.h included in setup-common.c is duplicated. It is also included on the 61st line. asm/bug.h included in arch/powerpc/include/asm/book3s/64/mmu-hash.h is duplicated. It is also included on the 12th line. asm/tlbflush.h included in arch/powerpc/include/asm/pgtable.h is duplicated. It is also included on the 11th line. asm/page.h included in arch/powerpc/include/asm/thread_info.h is duplicated. It is also included on the 13th line. Signed-off-by: Zhang Yunkai <zhang.yunkai@zte.com.cn> [mpe: Squash together from multiple commits] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nathan Chancellor authored
If identical_pvr_fixup() is not inlined, there are two modpost warnings: WARNING: modpost: vmlinux.o(.text+0x54e8): Section mismatch in reference from the function identical_pvr_fixup() to the function .init.text:of_get_flat_dt_prop() The function identical_pvr_fixup() references the function __init of_get_flat_dt_prop(). This is often because identical_pvr_fixup lacks a __init annotation or the annotation of of_get_flat_dt_prop is wrong. WARNING: modpost: vmlinux.o(.text+0x551c): Section mismatch in reference from the function identical_pvr_fixup() to the function .init.text:identify_cpu() The function identical_pvr_fixup() references the function __init identify_cpu(). This is often because identical_pvr_fixup lacks a __init annotation or the annotation of identify_cpu is wrong. identical_pvr_fixup() calls two functions marked as __init and is only called by a function marked as __init, so it should be marked as __init as well. At the same time, remove the inline keyword as it is not necessary to inline this function. The compiler is still free to do so if it feels it is worthwhile since commit 889b3c12 ("compiler: remove CONFIG_OPTIMIZE_INLINING entirely"). Fixes: 14b3d926 ("[POWERPC] 4xx: update 440EP(x)/440GR(x) identical PVR issue workaround") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1316 Link: https://lore.kernel.org/r/20210302200829.2680663-1-nathan@kernel.org
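The shape of the change, sketched as declarations only:

	/* Before: may be emitted in .text, which then references
	 * .init.text after init memory has been freed: */
	static inline void identical_pvr_fixup(unsigned long node);

	/* After: placed in .init.text like its callers and callees; the
	 * compiler is still free to inline it: */
	static void __init identical_pvr_fixup(unsigned long node);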
-
Nathan Chancellor authored
If fadump_calculate_reserve_size() is not inlined, there is a modpost warning: WARNING: modpost: vmlinux.o(.text+0x5196c): Section mismatch in reference from the function fadump_calculate_reserve_size() to the function .init.text:parse_crashkernel() The function fadump_calculate_reserve_size() references the function __init parse_crashkernel(). This is often because fadump_calculate_reserve_size lacks a __init annotation or the annotation of parse_crashkernel is wrong. fadump_calculate_reserve_size() calls parse_crashkernel(), which is marked as __init and fadump_calculate_reserve_size() is called from within fadump_reserve_mem(), which is also marked as __init. Mark fadump_calculate_reserve_size() as __init to fix the section mismatch. Additionally, remove the inline keyword as it is not necessary to inline this function; the compiler is still free to do so if it feels it is worthwhile since commit 889b3c12 ("compiler: remove CONFIG_OPTIMIZE_INLINING entirely"). Fixes: 11550dc0 ("powerpc/fadump: reuse crashkernel parameter for fadump memory reservation") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1300 Link: https://lore.kernel.org/r/20210302195013.2626335-1-nathan@kernel.org
-
Russell Currey authored
The rfi_flush and entry_flush selftests work by using the PM_LD_MISS_L1 perf event to count L1D misses. The value of this event has changed over time: - Power7 uses 0x400f0 - Power8 and Power9 use both 0x400f0 and 0x3e054 - Power10 uses only 0x3e054 Rather than relying on raw values, configure perf to count L1D read misses in the most explicit way available. This fixes the selftests to work on systems without 0x400f0 as PM_LD_MISS_L1, and should change no behaviour for systems that the tests already worked on. The only potential downside is that referring to a specific perf event requires PMU support implemented in the kernel for that platform. Signed-off-by: Russell Currey <ruscur@russell.cc> Acked-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210223070227.2916871-1-ruscur@russell.cc
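A hedged sketch of the explicit event setup (the selftest harness around it is omitted; it needs <linux/perf_event.h> and <sys/syscall.h>):

	struct perf_event_attr attr = {
		.type	= PERF_TYPE_HW_CACHE,
		.size	= sizeof(attr),
		/* L1-dcache read misses, spelled out rather than a raw
		 * platform-specific event code: */
		.config	= PERF_COUNT_HW_CACHE_L1D |
			  (PERF_COUNT_HW_CACHE_OP_READ << 8) |
			  (PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
		.disabled = 1,
	};
	int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);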
-
Bhaskar Chowdhury authored
s/droping/dropping/ Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210224075547.763063-1-unixbhaskar@gmail.com
-