- 03 Dec, 2020 39 commits
-
-
Christophe Leroy authored
Reorder actions in save_user_regs() and save_tm_user_regs() to regroup copies together in order to switch to user_access_begin() logic in a later patch. Move non-copy actions into new functions called prepare_save_user_regs() and prepare_save_tm_user_regs(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f6eac65781b4a57220477c8864bca2b57f29a5d5.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
For the non-VSX version, that's trivial: just use unsafe_copy_to_user() instead of __copy_to_user(). For the VSX version, remove the intermediate step through a buffer and use unsafe_put_user() directly. This generates far smaller code, which is acceptable to inline; see below:

Standard VSX version:

0000000000000000 <.copy_fpr_to_user>:
   0:	7c 08 02 a6	mflr    r0
   4:	fb e1 ff f8	std     r31,-8(r1)
   8:	39 00 00 20	li      r8,32
   c:	39 24 0b 80	addi    r9,r4,2944
  10:	7d 09 03 a6	mtctr   r8
  14:	f8 01 00 10	std     r0,16(r1)
  18:	f8 21 fe 71	stdu    r1,-400(r1)
  1c:	39 41 00 68	addi    r10,r1,104
  20:	e9 09 00 00	ld      r8,0(r9)
  24:	39 4a 00 08	addi    r10,r10,8
  28:	39 29 00 10	addi    r9,r9,16
  2c:	f9 0a 00 00	std     r8,0(r10)
  30:	42 00 ff f0	bdnz    20 <.copy_fpr_to_user+0x20>
  34:	e9 24 0d 80	ld      r9,3456(r4)
  38:	3d 42 00 00	addis   r10,r2,0
			3a: R_PPC64_TOC16_HA	.toc
  3c:	eb ea 00 00	ld      r31,0(r10)
			3e: R_PPC64_TOC16_LO_DS	.toc
  40:	f9 21 01 70	std     r9,368(r1)
  44:	e9 3f 00 00	ld      r9,0(r31)
  48:	81 29 00 20	lwz     r9,32(r9)
  4c:	2f 89 00 00	cmpwi   cr7,r9,0
  50:	40 9c 00 18	bge     cr7,68 <.copy_fpr_to_user+0x68>
  54:	4c 00 01 2c	isync
  58:	3d 20 40 00	lis     r9,16384
  5c:	79 29 07 c6	rldicr  r9,r9,32,31
  60:	7d 3d 03 a6	mtspr   29,r9
  64:	4c 00 01 2c	isync
  68:	38 a0 01 08	li      r5,264
  6c:	38 81 00 70	addi    r4,r1,112
  70:	48 00 00 01	bl      70 <.copy_fpr_to_user+0x70>
			70: R_PPC64_REL24	.__copy_tofrom_user
  74:	60 00 00 00	nop
  78:	e9 3f 00 00	ld      r9,0(r31)
  7c:	81 29 00 20	lwz     r9,32(r9)
  80:	2f 89 00 00	cmpwi   cr7,r9,0
  84:	40 9c 00 18	bge     cr7,9c <.copy_fpr_to_user+0x9c>
  88:	4c 00 01 2c	isync
  8c:	39 20 ff ff	li      r9,-1
  90:	79 29 00 44	rldicr  r9,r9,0,1
  94:	7d 3d 03 a6	mtspr   29,r9
  98:	4c 00 01 2c	isync
  9c:	38 21 01 90	addi    r1,r1,400
  a0:	e8 01 00 10	ld      r0,16(r1)
  a4:	eb e1 ff f8	ld      r31,-8(r1)
  a8:	7c 08 03 a6	mtlr    r0
  ac:	4e 80 00 20	blr

'unsafe' simulated VSX version (the ... are only nops) using the unsafe_copy_fpr_to_user() macro:

unsigned long copy_fpr_to_user(void __user *to, struct task_struct *task)
{
	unsafe_copy_fpr_to_user(to, task, failed);
	return 0;
failed:
	return 1;
}

0000000000000000 <.copy_fpr_to_user>:
   0:	39 00 00 20	li      r8,32
   4:	39 44 0b 80	addi    r10,r4,2944
   8:	7d 09 03 a6	mtctr   r8
   c:	7c 69 1b 78	mr      r9,r3
	...
  20:	e9 0a 00 00	ld      r8,0(r10)
  24:	f9 09 00 00	std     r8,0(r9)
  28:	39 4a 00 10	addi    r10,r10,16
  2c:	39 29 00 08	addi    r9,r9,8
  30:	42 00 ff f0	bdnz    20 <.copy_fpr_to_user+0x20>
  34:	e9 24 0d 80	ld      r9,3456(r4)
  38:	f9 23 01 00	std     r9,256(r3)
  3c:	38 60 00 00	li      r3,0
  40:	4e 80 00 20	blr
	...
  50:	38 60 00 01	li      r3,1
  54:	4e 80 00 20	blr

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/29f6c4b8e7a5bbc61e6a8801b78bbf493f9f819e.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
As this was the last user of put_sigset_t(), remove it as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c3ac4f2d134a3391bb51bdaa2d00e9a409aba9f8.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
put_sigset_t() calls copy_to_user() to copy two words, which is terribly inefficient for such a small copy. By switching to unsafe_put_user(), we end up with something as simple as:

 3cc:	81 3d 00 00	lwz     r9,0(r29)
 3d0:	91 26 00 b4	stw     r9,180(r6)
 3d4:	81 3d 00 04	lwz     r9,4(r29)
 3d8:	91 26 00 b8	stw     r9,184(r6)

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/06def97e87ac1c4ae8e3197e0982e1fab7b3c8ae.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Implement an 'unsafe' version of put_compat_sigset(). For big endian, use unsafe_put_user() directly to avoid an intermediate copy through the stack. For little endian, use a straight unsafe_copy_to_user(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/537c7082ee309a0bb9c67a50c5d9dd929aedb82d.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
MSR_TM_ACTIVE() is always defined and always returns 0 when CONFIG_PPC_TRANSACTIONAL_MEM is not selected, so the awful ifdefery in the middle of an if/else can be removed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f3c36d687e4228f58d5c207a4036aa9ddcc7420a.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In the same way as handle_signal32(), replace all user accesses with their unsafe_ equivalents, and move the trampoline code icache flush outside the user access block. Functions that have no unsafe_ equivalent also remain outside the access block. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2974314226256f958e2984912b48883ef1754185.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Replace access_ok() with user_access_begin() and change all user accesses to their unsafe_ versions. Move flush_icache_range() outside the user access block, as shown in the sketch below. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a27797f781aa00da96f8284c898173d18e952361.1597770847.git.christophe.leroy@csgroup.eu
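The shape of this conversion is the same across these signal patches. Below is a minimal, hypothetical sketch of the pattern (demo_frame, demo_setup_frame and the field names are made-up placeholders, not the real sigframe layout): a single faults-allowed window is opened with user_access_begin(), every store goes through unsafe_put_user() branching to one failure label, and the window is closed on both paths with user_access_end().

struct demo_frame {				/* placeholder layout, illustration only */
	unsigned long gpr1;
	unsigned long nip;
};

static int demo_setup_frame(struct demo_frame __user *frame, struct pt_regs *regs)
{
	if (!user_access_begin(frame, sizeof(*frame)))
		return 1;			/* frame not accessible: bail out early */

	unsafe_put_user(regs->gpr[1], &frame->gpr1, failed);
	unsafe_put_user(regs->nip, &frame->nip, failed);
	/* ...all other stores use unsafe_put_user()/unsafe_copy_to_user()... */

	user_access_end();
	return 0;

failed:
	user_access_end();			/* the window must be closed on the error path too */
	return 1;
}

Cache flushes and anything else without an unsafe_ variant stay outside this block, since no other work is allowed while user access is open.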
-
Christophe Leroy authored
Move signal trampoline setup into handle_signal32() and handle_rt_signal32(). At the same time, remove the define which hides the mc_pad field used for the trampoline. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e439cc0fa35aa45da6776520777a61848b92fd4b.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Miscellaneous changes to clean and make handle_signal32() and handle_rt_signal32() even more similar. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/df0bc8c3b8fa96390c46f611df79b2a94ac21844.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Rename pointers in handle_rt_signal32() to make it more similar to handle_signal32(): tm_frame becomes tm_mctx, frame becomes mctx, rt_sf becomes frame. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/be77477b0f05397876015b218e36548ee8f5e10b.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Those two functions are similar and serve the same purpose. To ease refactoring, move them close to each other. This is a pure move: no code change, no cosmetics. Yes, checkpatch is not happy; most of that will clear up later. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/dbce67900bf566bcf40179467bf1eb500814c405.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
If something is bad in the frame, there is no point in knowing exactly which part of the frame is wrong, as it was allocated as a single block. Always print the root address of the frame in case of failed user access, just like handle_signal32() does. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/691895bd31fee89a2d8370befd66ad4eff5b63f2.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The logging of a bad frame appears half a dozen times and is pretty similar each time. Create a signal_fault() function to perform that logging. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fa094445c119fc00315e1c13783b493346306c6a.1597770847.git.christophe.leroy@csgroup.eu
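A sketch of what such a centralised helper could look like (illustrative only; the exact message format and fields of the real helper may differ):

void signal_fault(struct task_struct *tsk, struct pt_regs *regs,
		  const char *where, void __user *ptr)
{
	/* Rate-limited logging of a bad user frame, gated like the old call sites. */
	if (show_unhandled_signals)
		printk_ratelimited(KERN_INFO
				   "%s[%d]: bad frame in %s: %p nip %08lx lr %08lx\n",
				   tsk->comm, task_pid_nr(tsk), where, ptr,
				   regs->nip, regs->link);
}

Each caller then passes its own function name as "where" instead of open-coding the printk.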
-
Christophe Leroy authored
Instead of calling get_tm_stackpointer() from the caller, call it directly from get_sigframe(). This avoids a double call and allows get_tm_stackpointer() to become static and be inlined into get_sigframe() by GCC. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/abfdc105b8b28c4eb3ab9a26297d17f302b600ea.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
get_clean_sp() is only used once, in kernel/signal.c. GCC is smart enough to see that x & 0xffffffff is a nop calculation on PPC32, so there is no need for a special trivial PPC32 version. Include the logic from the PPC64 version of get_clean_sp() directly in get_sigframe(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/13ef6510ce30a4867e043157b93af5bb8c67fb3b.1597770847.git.christophe.leroy@csgroup.eu
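The folded-in logic amounts to masking the stack pointer down to 32 bits when the task is 32-bit; a rough illustrative sketch, not the exact upstream code (clean_user_sp is a made-up name):

static unsigned long clean_user_sp(struct pt_regs *regs)
{
	unsigned long sp = regs->gpr[1];

	/* A compat (32-bit) task on a 64-bit kernel only has a 32-bit stack pointer. */
	if (IS_ENABLED(CONFIG_PPC64) && is_32bit_task())
		sp &= 0xffffffffUL;

	return sp;
}

On a PPC32 kernel is_32bit_task() is constant and the mask is a nop, so GCC drops it, which is why no separate PPC32 variant is needed.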
-
Christophe Leroy authored
This access_ok() will soon be performed by user_access_begin(). So move it out of get_sigframe(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/900b93744732ed0887f28f5b6a40730fb04a43fa.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
There is already the same BUG_ON() check in do_signal(), which is the only caller of handle_rt_signal64(), handle_rt_signal32() and handle_signal32(). Remove those three redundant BUG_ON()s. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3582e10a341d523c9c3f1ac925c3aaefc9d9293d.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
The e300c2 core which is embedded in mpc832x CPU doesn't have an FPU. Make it possible to not select CONFIG_PPC_FPU when building a kernel dedicated to that target. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fcdc60d85baf80eaa0a7f3261d9d889282068216.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
There is no point in copying floating point regs when there is no FPU and MATH_EMULATION is not selected. Create a new CONFIG_PPC_FPU_REGS bool that is selected by CONFIG_MATH_EMULATION and CONFIG_PPC_FPU, and use it to opt out everything related to fp_state in thread_struct. The asm constants used only by fpu.S are opted out with CONFIG_PPC_FPU, as fpu.S is only built when CONFIG_PPC_FPU is set.

The following app spends approx 8.1 seconds of system time on an 8xx without the patch, and 7.0 seconds with the patch (13.5% reduction). On an 832x, it spends approx 2.6 seconds of system time without the patch and 2.1 seconds with the patch (19% reduction).

void sigusr1(int sig) { }

int main(int argc, char **argv)
{
	int i = 100000;

	signal(SIGUSR1, sigusr1);
	for (;i--;)
		raise(SIGUSR1);
	exit(0);
}

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7569070083e6cd5b279bb5023da601aba3c06f3c.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
On the same model as ptrace_get_reg() and ptrace_put_reg(), create ptrace_get_fpr() and ptrace_put_fpr() to get/set the floating point registers. The boundary checks are moved into them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/24a1baedea7f7ae7b6bf27be98bab6d01b5ca2c1.1597770847.git.christophe.leroy@csgroup.eu
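A rough sketch of the getter side, to show the boundary check living inside the helper rather than at the call site. This assumes the usual powerpc ptrace constants (PT_FPR0, PT_FPSCR) and fp_state layout; the real helpers may differ in detail:

int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data)
{
	unsigned int fpidx = index - PT_FPR0;

	if (index > PT_FPSCR)		/* boundary check now lives here */
		return -EIO;

	flush_fp_to_thread(child);	/* make sure fp_state is up to date */
	if (fpidx < (PT_FPSCR - PT_FPR0))
		memcpy(data, &child->thread.fp_state.fpr[fpidx], sizeof(*data));
	else
		*data = child->thread.fp_state.fpscr;

	return 0;
}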
-
Christophe Leroy authored
Today we have:

#ifdef CONFIG_PPC32
	index = addr >> 2;
	if ((addr & 3) || child->thread.regs == NULL)
#else
	index = addr >> 3;
	if ((addr & 7))
#endif

sizeof(long) has the value 4 on PPC32 and 8 on PPC64. Dividing by 4 is equivalent to >> 2 and dividing by 8 is equivalent to >> 3, and 3 and 7 are respectively (sizeof(long) - 1). Use sizeof(long) to get rid of the #ifdef CONFIG_PPC32 and consolidate the calculation and checking, as shown below. thread.regs has to be non-NULL on both PPC32 and PPC64, so adding that test on PPC64 is harmless. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3cd1e284e93c60db981659585e18d1f6bb73ed2f.1597770847.git.christophe.leroy@csgroup.eu
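The consolidated form is then roughly (a sketch mirroring the fragment above; variable names are the same as in that fragment):

	index = addr / sizeof(long);			/* >> 2 on PPC32, >> 3 on PPC64 */
	if ((addr & (sizeof(long) - 1)) || !child->thread.regs)
		/* misaligned address or no saved regs: reject, e.g. return -EIO */

Since sizeof(long) is a compile-time constant, the division and mask compile to exactly the shifts and bit tests of the original #ifdef'ed code.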
-
Christophe Leroy authored
ptrace_get_reg() and ptrace_set_reg() are only used internally by ptrace. Move them to arch/powerpc/kernel/ptrace/ptrace-decl.h. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/376c258267aeae54a4423bc4a2e107a9611f0039.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
To really be inlined, the functions need to be defined in the same C file as the caller, or in an included header. Move the functions defined inline from signal.c into signal.h. Fixes: 3dd4eb83 ("powerpc: move common register copy functions from signal_32.c to signal.c") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/35b1bd44a1a66f5bcf9b457a1c480ac8d5ef50b2.1597770847.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Provides __kernel_clock_gettime64() on vdso32. This is the 64-bit version of __kernel_clock_gettime(), and it is y2038 compliant. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-9-mpe@ellerman.id.au
-
Christophe Leroy authored
With the C VDSO, the performance is slightly lower, but it is worth it as it will ease maintenance and evolution, and also brings clocks that are not supported with the ASM VDSO.

On an 8xx at 132 MHz, vdsotest with the ASM VDSO:
gettimeofday: vdso: 828 nsec/call
clock-getres-realtime-coarse: vdso: 391 nsec/call
clock-gettime-realtime-coarse: vdso: 614 nsec/call
clock-getres-realtime: vdso: 460 nsec/call
clock-gettime-realtime: vdso: 876 nsec/call
clock-getres-monotonic-coarse: vdso: 399 nsec/call
clock-gettime-monotonic-coarse: vdso: 691 nsec/call
clock-getres-monotonic: vdso: 460 nsec/call
clock-gettime-monotonic: vdso: 1026 nsec/call

On an 8xx at 132 MHz, vdsotest with the C VDSO:
gettimeofday: vdso: 955 nsec/call
clock-getres-realtime-coarse: vdso: 545 nsec/call
clock-gettime-realtime-coarse: vdso: 592 nsec/call
clock-getres-realtime: vdso: 545 nsec/call
clock-gettime-realtime: vdso: 941 nsec/call
clock-getres-monotonic-coarse: vdso: 545 nsec/call
clock-gettime-monotonic-coarse: vdso: 591 nsec/call
clock-getres-monotonic: vdso: 545 nsec/call
clock-gettime-monotonic: vdso: 940 nsec/call

It is even better for gettime with monotonic clocks.

Unsupported clocks with ASM VDSO:
clock-gettime-boottime: vdso: 3851 nsec/call
clock-gettime-tai: vdso: 3852 nsec/call
clock-gettime-monotonic-raw: vdso: 3396 nsec/call

Same clocks with C VDSO:
clock-gettime-tai: vdso: 941 nsec/call
clock-gettime-monotonic-raw: vdso: 1001 nsec/call
clock-gettime-monotonic-coarse: vdso: 591 nsec/call

On an 8321E at 333 MHz, vdsotest with the ASM VDSO:
gettimeofday: vdso: 220 nsec/call
clock-getres-realtime-coarse: vdso: 102 nsec/call
clock-gettime-realtime-coarse: vdso: 178 nsec/call
clock-getres-realtime: vdso: 129 nsec/call
clock-gettime-realtime: vdso: 235 nsec/call
clock-getres-monotonic-coarse: vdso: 105 nsec/call
clock-gettime-monotonic-coarse: vdso: 208 nsec/call
clock-getres-monotonic: vdso: 129 nsec/call
clock-gettime-monotonic: vdso: 274 nsec/call

On an 8321E at 333 MHz, vdsotest with the C VDSO:
gettimeofday: vdso: 272 nsec/call
clock-getres-realtime-coarse: vdso: 160 nsec/call
clock-gettime-realtime-coarse: vdso: 184 nsec/call
clock-getres-realtime: vdso: 166 nsec/call
clock-gettime-realtime: vdso: 281 nsec/call
clock-getres-monotonic-coarse: vdso: 160 nsec/call
clock-gettime-monotonic-coarse: vdso: 184 nsec/call
clock-getres-monotonic: vdso: 169 nsec/call
clock-gettime-monotonic: vdso: 275 nsec/call

On a Power9 Nimbus DD2.2 at 3.8GHz, with the ASM VDSO:
clock-gettime-monotonic: vdso: 35 nsec/call
clock-getres-monotonic: vdso: 16 nsec/call
clock-gettime-monotonic-coarse: vdso: 18 nsec/call
clock-getres-monotonic-coarse: vdso: 522 nsec/call
clock-gettime-monotonic-raw: vdso: 598 nsec/call
clock-getres-monotonic-raw: vdso: 520 nsec/call
clock-gettime-realtime: vdso: 34 nsec/call
clock-getres-realtime: vdso: 16 nsec/call
clock-gettime-realtime-coarse: vdso: 18 nsec/call
clock-getres-realtime-coarse: vdso: 517 nsec/call
getcpu: vdso: 8 nsec/call
gettimeofday: vdso: 25 nsec/call

And with the C VDSO:
clock-gettime-monotonic: vdso: 37 nsec/call
clock-getres-monotonic: vdso: 20 nsec/call
clock-gettime-monotonic-coarse: vdso: 21 nsec/call
clock-getres-monotonic-coarse: vdso: 19 nsec/call
clock-gettime-monotonic-raw: vdso: 38 nsec/call
clock-getres-monotonic-raw: vdso: 20 nsec/call
clock-gettime-realtime: vdso: 37 nsec/call
clock-getres-realtime: vdso: 20 nsec/call
clock-gettime-realtime-coarse: vdso: 20 nsec/call
clock-getres-realtime-coarse: vdso: 19 nsec/call
getcpu: vdso: 8 nsec/call
gettimeofday: vdso: 28 nsec/call

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-8-mpe@ellerman.id.au
-
Christophe Leroy authored
On PPC64, the TOC pointer needs to be saved and restored. Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-7-mpe@ellerman.id.au
-
Christophe Leroy authored
Prepare for switching the VDSO to the generic C implementation in a following patch. Here, we:
- Prepare the helpers to call the C VDSO functions
- Prepare the required callbacks for the C VDSO functions
- Prepare the clocksource.h files to define VDSO_ARCH_CLOCKMODES
- Add the C trampolines to the generic C VDSO functions

powerpc is a bit special for the VDSO, as well as for system calls, in that it requires setting the CR SO bit, which cannot be done in C. Therefore, entry/exit needs to be performed in ASM.

Implementing __arch_get_vdso_data() would clobber the link register, requiring the caller to save it. As the ASM calling function already has to set up a stack frame and saves the link register before calling the C vdso function, retrieving the vdso data pointer there is lighter.

Implement __arch_vdso_capable() and always return true.

Provide vdso_shift_ns(), as the generic x >> s gives the following bad result:

  18:	35 25 ff e0	addic.  r9,r5,-32
  1c:	41 80 00 10	blt     2c <shift+0x14>
  20:	7c 64 4c 30	srw     r4,r3,r9
  24:	38 60 00 00	li      r3,0
	...
  2c:	54 69 08 3c	rlwinm  r9,r3,1,0,30
  30:	21 45 00 1f	subfic  r10,r5,31
  34:	7c 84 2c 30	srw     r4,r4,r5
  38:	7d 29 50 30	slw     r9,r9,r10
  3c:	7c 63 2c 30	srw     r3,r3,r5
  40:	7d 24 23 78	or      r4,r9,r4

In our case the shift is always <= 32. In addition, the upper 32 bits of the result are likely zero. Let GCC know it; it also optimises the following calculations.

With the patch, we get:

   0:	21 25 00 20	subfic  r9,r5,32
   4:	7c 69 48 30	slw     r9,r3,r9
   8:	7c 84 2c 30	srw     r4,r4,r5
   c:	7d 24 23 78	or      r4,r9,r4
  10:	7c 63 2c 30	srw     r3,r3,r5

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-6-mpe@ellerman.id.au
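A plausible shape for such a helper, based on the description above (an illustrative sketch, not necessarily the exact upstream code): split the 64-bit value into halves and shift them explicitly, so GCC knows the shift count never reaches the word size and the high half is usually zero.

static __always_inline u64 vdso_shift_ns(u64 ns, unsigned long shift)
{
	u32 hi = ns >> 32;
	u32 lo = ns;

	/* Assumes 0 < shift < 32, which holds for clocksource shift values. */
	lo >>= shift;
	lo |= hi << (32 - shift);
	hi >>= shift;

	if (likely(hi == 0))		/* upper 32 bits of the result are likely zero */
		return lo;

	return ((u64)hi << 32) | lo;
}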
-
Michael Ellerman authored
Currently we use ifdef __powerpc64__ in barrier.h to decide if we should use lwsync or eieio for SMPWMB which is then used by __smp_wmb(). That means when we are building the compat VDSO we will use eieio, because it's 32-bit code, even though we're building a 64-bit kernel for a 64-bit CPU. Although eieio should work, it would be cleaner if we always used the same barrier, even for the 32-bit VDSO. So change the ifdef to CONFIG_PPC64, so that the selection is made based on the bitness of the kernel we're building for, not the current compilation unit. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-5-mpe@ellerman.id.au
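Concretely, the change described here is a one-character-class edit in barrier.h; a sketch of the result (macro names as used by powerpc's barrier.h, shown here for illustration):

/* Select the write-memory-barrier instruction on the kernel's bitness,
 * not on the bitness of the current compilation unit (e.g. the compat VDSO). */
#ifdef CONFIG_PPC64			/* was: #ifdef __powerpc64__ */
#define SMPWMB      LWSYNC
#else
#define SMPWMB      eieio
#endif

__smp_wmb() then emits SMPWMB, so the 32-bit compat VDSO built for a 64-bit kernel now uses lwsync like the rest of that kernel.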
-
Michael Ellerman authored
When we're building the compat VDSO we are building 32-bit code but in the context of a 64-bit kernel configuration. To make this work we need to be careful in some places when using ifdefs to differentiate between CONFIG_PPC64 and __powerpc64__. CONFIG_PPC64 indicates the kernel we're building is 64-bit, but it doesn't tell us that we're currently building 64-bit code - we could be building 32-bit code for the compat VDSO. On the other hand __powerpc64__ tells us that we are currently building 64-bit code (and therefore we must also be building a 64-bit kernel). In the case of get_tb() we want to use the 32-bit code sequence regardless of whether the kernel we're building for is 64-bit or 32-bit, what matters is the word size of the current object. So we need to check __powerpc64__ to decide if we use mftb() or the mftbu()/mftb() sequence. For mftb() the logic for CPU_FTR_CELL_TB_BUG only makes sense if we're building 64-bit code, so guard that with a __powerpc64__ check. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-4-mpe@ellerman.id.au
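For reference, the 32-bit read sequence mentioned here has the classic shape below. This is an illustrative sketch assuming the mftb()/mftbu() helpers that read the lower/upper halves of the timebase; the wrapper name get_tb32 is made up:

static inline u64 get_tb32(void)
{
	unsigned int tbhi, tblo, tbhi2;

	do {
		tbhi  = mftbu();	/* read upper 32 bits */
		tblo  = mftb();		/* read lower 32 bits */
		tbhi2 = mftbu();	/* re-read upper half */
	} while (tbhi != tbhi2);	/* retry if the low word carried into the high word */

	return ((u64)tbhi << 32) | tblo;
}

On 64-bit code, a single mftb() suffices (subject to the CPU_FTR_CELL_TB_BUG handling mentioned above), which is why the choice is made on __powerpc64__ rather than CONFIG_PPC64.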
-
Christophe Leroy authored
In order to easily use get_tb() from C VDSO, move timebase functions into a new header named asm/vdso/timebase.h Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-3-mpe@ellerman.id.au
-
Christophe Leroy authored
cpu_relax() needs to be in asm/vdso/processor.h to be used by the generic C VDSO library. Move it there. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-2-mpe@ellerman.id.au
-
Christophe Leroy authored
In order to build VDSO32 for PPC64, we need to have CPU_FTRS_POSSIBLE and CPU_FTRS_ALWAYS independent of whether we are building the 32-bit VDSO or the 64-bit VDSO. Use #ifdef CONFIG_PPC64 instead of #ifdef __powerpc64__. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126131006.2431205-1-mpe@ellerman.id.au
-
Michael Ellerman authored
Update the NUMA Kconfig description to match other architectures, and add some help text. Shamelessly borrowed from x86/arm64. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20201124120547.1940635-3-mpe@ellerman.id.au
-
Michael Ellerman authored
Our NUMA option is default y for pseries, but not powernv. The bulk of powernv systems are NUMA, so make NUMA default y for powernv also. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Link: https://lore.kernel.org/r/20201124120547.1940635-2-mpe@ellerman.id.au
-
Michael Ellerman authored
Our Kconfig allows NUMA to be enabled without SMP, but none of our defconfigs use that combination. This means it can easily be broken inadvertently by code changes, which has happened recently. Although it's theoretically possible to have a machine with a single CPU and multiple memory nodes, I can't think of any real systems where that's the case. Even so if such a system exists, it can just run an SMP kernel anyway. So to avoid the need to add extra #ifdefs and/or build breaks, make NUMA depend on SMP. Reported-by: kernel test robot <lkp@intel.com> Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20201124120547.1940635-1-mpe@ellerman.id.au
-
Christophe Leroy authored
ioreadXX()/ioreadXXbe() accessors are equivalent to the ppc in_leXX()/in_beXX() accessors, but they are not inlined. Since commit 0eb57368 ("powerpc/kerenl: Enable EEH for IO accessors"), the 'le' versions are equivalent to the ones defined in asm-generic/io.h, although the ones there are inlined. Include asm-generic/io.h to get them. Keep the ppc versions of the 'be' ones as they are optimised, but make them inline in ppc io.h.

This reduces the size of the ppc64e_defconfig build by 3 kbytes:

   text	   data	    bss	     dec	    hex	filename
10160733	4343422	 562972	15067127	 e5e7f7	vmlinux.before
10159239	4341590	 562972	15063801	 e5daf9	vmlinux.after

A typical function using ioread and iowrite before the change:

c00000000066a3c4 <.ata_bmdma_stop>:
c00000000066a3c4:	7c 08 02 a6	mflr    r0
c00000000066a3c8:	fb c1 ff f0	std     r30,-16(r1)
c00000000066a3cc:	f8 01 00 10	std     r0,16(r1)
c00000000066a3d0:	fb e1 ff f8	std     r31,-8(r1)
c00000000066a3d4:	f8 21 ff 81	stdu    r1,-128(r1)
c00000000066a3d8:	eb e3 00 00	ld      r31,0(r3)
c00000000066a3dc:	eb df 00 98	ld      r30,152(r31)
c00000000066a3e0:	7f c3 f3 78	mr      r3,r30
c00000000066a3e4:	4b 9b 6f 7d	bl      c000000000021360 <.ioread8>
c00000000066a3e8:	60 00 00 00	nop
c00000000066a3ec:	7f c4 f3 78	mr      r4,r30
c00000000066a3f0:	54 63 06 3c	rlwinm  r3,r3,0,24,30
c00000000066a3f4:	4b 9b 70 4d	bl      c000000000021440 <.iowrite8>
c00000000066a3f8:	60 00 00 00	nop
c00000000066a3fc:	7f e3 fb 78	mr      r3,r31
c00000000066a400:	38 21 00 80	addi    r1,r1,128
c00000000066a404:	e8 01 00 10	ld      r0,16(r1)
c00000000066a408:	eb c1 ff f0	ld      r30,-16(r1)
c00000000066a40c:	7c 08 03 a6	mtlr    r0
c00000000066a410:	eb e1 ff f8	ld      r31,-8(r1)
c00000000066a414:	4b ff ff 8c	b       c00000000066a3a0 <.ata_sff_dma_pause>

The same function with this patch:

c000000000669cb4 <.ata_bmdma_stop>:
c000000000669cb4:	e8 63 00 00	ld      r3,0(r3)
c000000000669cb8:	e9 43 00 98	ld      r10,152(r3)
c000000000669cbc:	7c 00 04 ac	hwsync
c000000000669cc0:	89 2a 00 00	lbz     r9,0(r10)
c000000000669cc4:	0c 09 00 00	twi     0,r9,0
c000000000669cc8:	4c 00 01 2c	isync
c000000000669ccc:	55 29 06 3c	rlwinm  r9,r9,0,24,30
c000000000669cd0:	7c 00 04 ac	hwsync
c000000000669cd4:	99 2a 00 00	stb     r9,0(r10)
c000000000669cd8:	a1 4d 06 f0	lhz     r10,1776(r13)
c000000000669cdc:	2c 2a 00 00	cmpdi   r10,0
c000000000669ce0:	41 c2 00 08	beq-    c000000000669ce8 <.ata_bmdma_stop+0x34>
c000000000669ce4:	b1 4d 06 f2	sth     r10,1778(r13)
c000000000669ce8:	4b ff ff a8	b       c000000000669c90 <.ata_sff_dma_pause>

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/18b357d68c4cde149f75c7a1031c850925cd8128.1605981539.git.christophe.leroy@csgroup.eu
-
Athira Rajeev authored
On systems without any specific PMU driver support registered, running 'perf record' with --intr-regs will crash (perf record -I <workload>).

The relevant portion from the crash logs and Call Trace:

Unable to handle kernel paging request for data at address 0x00000068
Faulting instruction address: 0xc00000000013eb18
Oops: Kernel access of bad area, sig: 11 [#1]
CPU: 2 PID: 13435 Comm: kill Kdump: loaded Not tainted 4.18.0-193.el8.ppc64le #1
NIP: c00000000013eb18 LR: c000000000139f2c CTR: c000000000393d80
REGS: c0000004a07ab4f0 TRAP: 0300 Not tainted (4.18.0-193.el8.ppc64le)
NIP [c00000000013eb18] is_sier_available+0x18/0x30
LR [c000000000139f2c] perf_reg_value+0x6c/0xb0
Call Trace:
[c0000004a07ab770] [c0000004a07ab7c8] 0xc0000004a07ab7c8 (unreliable)
[c0000004a07ab7a0] [c0000000003aa77c] perf_output_sample+0x60c/0xac0
[c0000004a07ab840] [c0000000003ab3f0] perf_event_output_forward+0x70/0xb0
[c0000004a07ab8c0] [c00000000039e208] __perf_event_overflow+0x88/0x1a0
[c0000004a07ab910] [c00000000039e42c] perf_swevent_hrtimer+0x10c/0x1d0
[c0000004a07abc50] [c000000000228b9c] __hrtimer_run_queues+0x17c/0x480
[c0000004a07abcf0] [c00000000022aaf4] hrtimer_interrupt+0x144/0x520
[c0000004a07abdd0] [c00000000002a864] timer_interrupt+0x104/0x2f0
[c0000004a07abe30] [c0000000000091c4] decrementer_common+0x114/0x120

When a perf record session is started with the "-I" option, capturing registers on each sample calls is_sier_available() to check for SIER (Sample Instruction Event Register) availability on the platform. This function in core-book3s accesses 'ppmu->flags'. If a platform-specific PMU driver is not registered, ppmu is set to NULL and accessing its members results in a crash. Fix the crash by returning false in is_sier_available() if ppmu is not set.

Fixes: 333804dc ("powerpc/perf: Update perf_regs structure to include SIER") Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1606185640-1720-1-git-send-email-atrajeev@linux.vnet.ibm.com
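The fix itself is a simple NULL guard. A sketch of the guarded accessor, assuming the PPMU_HAS_SIER flag already checked by the existing code:

bool is_sier_available(void)
{
	if (!ppmu)			/* no PMU driver registered: SIER info unavailable */
		return false;

	return ppmu->flags & PPMU_HAS_SIER;
}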
-
Alan Modra authored
Use bcl 20,31,0f rather than plain bl to avoid unbalancing the link stack. Update the code to use REL16 relocs, available for ppc64 in 2009 (and ppc32 in 2005). Signed-off-by: Alan Modra <amodra@gmail.com> [mpe: Incorporate more detail into the change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 26 Nov, 2020 1 commit
-
-
Bill Wendling authored
The clang toolchain treats inline assembly a bit differently than straight assembly code. In particular, inline assembly doesn't have the complete context available to resolve expressions. This is intentional to avoid divergence in the resulting assembly code. We can work around this issue by borrowing a workaround done for ARM, i.e. not directly testing the labels themselves, but by moving the current output pointer by a value that should always be zero. If this value is not null, then we will trigger a backward move, which is explicitly forbidden. Signed-off-by: Bill Wendling <morbo@google.com> [mpe: Put it in a macro and only do the workaround for clang] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201120224034.191382-4-morbo@google.com
-