- 16 Jun, 2021 18 commits
-
-
Christophe Leroy authored
switch_mmu_context() does things that can easily be done in C. For updating user segments, we have update_user_segments(). As mentioned in commit b5efec00 ("powerpc/32s: Move KUEP locking/unlocking in C"), update_user_segments() has the loop unrolled, which is a significant performance gain. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/05c0875ad8220c03452c3a334946e207c6ca04d6.1622708530.git.christophe.leroy@csgroup.eu
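A minimal sketch of what the unrolled update looks like, assuming the book3s/32 layout of 16 x 256MB segments and the mtsr() helper (names follow the kernel sources around this series; details are illustrative):

    /* Sketch: VSIDs of consecutive segments differ by 0x111, hence the
     * "val + n * 0x111" increments; only user segments are touched. */
    static inline void update_user_segment(int n, u32 val)
    {
    	if (n << 28 < TASK_SIZE)
    		mtsr(val + n * 0x111, n << 28);
    }

    static inline void update_user_segments(u32 val)
    {
    	val &= 0xf0ffffff;	/* keep the flag bits, reset the VSID part */

    	update_user_segment(0, val);
    	update_user_segment(1, val);
    	/* ... unrolled through ... */
    	update_user_segment(15, val);
    }

Unrolling trades a little text size for removing loop overhead on a path that runs on every context switch.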
-
Christophe Leroy authored
In order to reuse it in switch_mmu_context(), move the CTX_TO_VSID() macro into asm/book3s/32/mmu-hash.h. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/26b36ef2939234a04b37baf6ffe50cba81f5d1b7.1622708530.git.christophe.leroy@csgroup.eu
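For context, the macro is a one-liner mapping a (context, segment) pair to a 24-bit VSID; its shape (quoted from memory of the hash-MMU code, so treat the constants as indicative) is:

    /* VSID for segment 'id' of context 'c' */
    #define CTX_TO_VSID(c, id)	((((c) * (897 * 16)) + ((id) * 0x111)) & 0xffffff)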
-
Christophe Leroy authored
KUEP implements the update of user segment registers. Move it into mmu-hash.h in order to use it from other places. And inline kuep_lock() and kuep_unlock(). Inlining kuep_lock() is important for system_call_exception(): otherwise system_call_exception() has to save the system call parameters to the stack even though they are used immediately afterwards, and doing that takes more instructions than kuep_lock() itself. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/24591ca480d14a62ef910e38a5273d551262c4a2.1622708530.git.christophe.leroy@csgroup.eu
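A hedged sketch of what the inlined pair boils down to, assuming SR_NX is the No-execute bit of a segment register:

    /* Sketch: KUEP = toggle No-execute on all user segments. Inlined so
     * system_call_exception() need not spill its arguments around a call. */
    static __always_inline void kuep_lock(void)
    {
    	update_user_segments(mfsr(0) | SR_NX);
    }

    static __always_inline void kuep_unlock(void)
    {
    	update_user_segments(mfsr(0) & ~SR_NX);
    }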
-
Christophe Leroy authored
Avoids the #ifdef in mmu.c Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0b7a13d414837e58264edc336b89c2fe9f35f9bc.1622708530.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
PPC64 uses MMU features to enable/disable KUAP at boot time, but feature fixups are applied far too early on PPC32 for that approach to work. Since commit c1672883 ("powerpc/32: Manage KUAP in C"), all KUAP is in C, so it is now possible to use static branches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3dca510ce555335261a47c4799167da698f569c0.1622782111.git.christophe.leroy@csgroup.eu
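A sketch of the static-branch pattern this enables (the key name follows this series; treat it as illustrative):

    #include <linux/jump_label.h>

    static DEFINE_STATIC_KEY_FALSE(disable_kuap_key);

    static __always_inline bool kuap_is_disabled(void)
    {
    	/* Compiles to a single nop/branch patched at runtime, so it
    	 * works even though PPC32 applies feature fixups too early
    	 * for MMU-feature based checks. */
    	return static_branch_unlikely(&disable_kuap_key);
    }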
-
Christophe Leroy authored
PowerPC 44x has two bits for exec protection in TLBs: one for user (UX) and one for supervisor (SX). Clear SX on user pages in TLB miss handlers to provide KUEP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/169310e08152aa1d96c979770291d165ec6896ae.1622616032.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Since commit 555904d0 ("powerpc/8xx: MM_SLICE is not needed anymore"), CONFIG_PPC_MMU_NOHASH_32 has not been used. Remove it. Reported-by: Tom Rix <trix@redhat.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bf1e074f6fb213a1c4cc4964370bdce4b648d647.1622706812.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use PPC_RAW_ macros to simplify the code. And use PPC_LO()/PPC_HI() instead of IMM_L()/IMM_H(), which are for internal use inside ppc-opcode.h. Those macros are self-explanatory, so the comments can go as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5a167b8ba4d33a5c09cd504f0c862e25ffe85459.1621516826.git.christophe.leroy@csgroup.eu
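For instance, emitting a 'lis/ori' pair to load a 32-bit address then reads like the following sketch (macro spellings as in ppc-opcode.h around this series):

    /* lis r12,hi16(addr) ; ori r12,r12,lo16(addr) */
    u32 insns[2];

    insns[0] = PPC_RAW_LIS(_R12, PPC_HI(addr));
    insns[1] = PPC_RAW_ORI(_R12, _R12, PPC_LO(addr));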
-
Christophe Leroy authored
Now that lines can be up to 100 chars long, minimise the number of split lines to increase readability. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8ebbd977ea8cf8d706d82458f2a21acd44562a99.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
nip is already an unsigned long, no cast needed. op_callback_addr and emulate_step_addr are kprobe_opcode_t *. Their value is obtained with ppc_kallsyms_lookup_name(), which returns 'unsigned long', and their values are passed to create_branch(), which expects 'unsigned long'. So change them to 'unsigned long' to avoid casting them back and forth. can_optimize() uses p->addr several times as 'unsigned long'. Use a local 'unsigned long' variable and avoid casting multiple times. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e03192a6d4123242a275e71ce2ba0bb4d90700c1.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
ppc_inst(), ppc_inst_prefixed() and ppc_inst_swab() can easily be made common to both PPC32 and PPC64. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d54c63dcac6d190e1cc0d2fe3259d6e621928cdf.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
'struct ppc_inst' is an internal representation of an instruction, but in-memory instructions are and will remain a table of 'u32' forever. Replace all 'struct ppc_inst *' used for locating an instruction in memory by 'u32 *'. This removes a lot of undue casts to 'struct ppc_inst *'. It also helps locate abuse of 'struct ppc_inst' dereferences. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix ppc_inst_next(), use u32 instead of unsigned int] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7062722b087228e42cbd896e39bfdf526d6a340a.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
'struct ppc_inst' is meant to represent an instruction internally, it is not meant to dereference code in memory. For testing code patching, use patch_instruction() to properly write into memory the code to be tested. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d8425fb42a4adebc35b7509f121817eeb02fac31.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
instr_is_branch_to_addr() is only used in code-patching.c Make it static. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5f6b9c8c83170ed310953eac2f5b14539bfc964a.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
'struct ppc_inst' is an internal structure to represent an instruction; it is not directly the representation of that instruction in the text code. It is not meant to map and dereference code. Dereferencing code directly through 'struct ppc_inst' has two main issues: - On powerpc, structs are expected to be 8-byte aligned while code is laid out on 4-byte boundaries. - Should a non-prefixed instruction lie at the end of a page with the following page unmapped, reading it as a 'struct ppc_inst' would generate a page fault. In-memory code must be accessed with ppc_inst_read(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c9a1201dd0a66b4a0f91f0fb46d9385cbf030feb.1621516826.git.christophe.leroy@csgroup.eu
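A sketch in the spirit of the ppc64 ppc_inst_read(): one aligned u32 read first, and the second word only when the first is a prefix (OP_PREFIX is primary opcode 1):

    static inline struct ppc_inst ppc_inst_read(const u32 *ptr)
    {
    	u32 val = *ptr;			/* always a 4-byte aligned read */

    	if ((val >> 26) == OP_PREFIX)	/* prefixed: fetch the suffix too */
    		return ppc_inst_prefix(val, *(ptr + 1));

    	return ppc_inst(val);
    }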
-
Christophe Leroy authored
Avoid casting/dereferencing ppc_inst() as u64*; check each member of the struct where relevant. And remove the 0xff initialisation of the suffix for non-prefixed instructions. An instruction with 0xff as a suffix might be invalid, but it is still a prefixed instruction and has to be treated as such. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d8b155e930b7a9708ca110e8ff0ace6713a7af75.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Remove unneeded line splits. And remove unneeded local variable initialisation. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fb097fda78cc6852905ef00f8f7bf371b6cc66f7.1621516826.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Declare __gui_ptr as 'u32 *' instead of casting it at each use to 'unsigned int *' (which is an equivalent type). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Use u32 * instead of unsigned int *] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2c2123998e05535d08ba03a96ea1eea921d06a86.1621516826.git.christophe.leroy@csgroup.eu
-
- 15 Jun, 2021 22 commits
-
-
Christophe Leroy authored
get_user_instr() lacks sparse detection for the __user tag. This is because __gui_ptr is assigned with a cast. Fix that by adding a __chk_user_ptr() Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0320e5b41a794fd456ab8c5993bbfadcf9e1d8b4.1621516826.git.christophe.leroy@csgroup.eu
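The mechanism in miniature (a sketch): __chk_user_ptr() costs nothing at runtime but makes sparse verify the address space that the cast would otherwise launder away:

    u32 __user *__gui_ptr = (u32 __user *)ptr;

    __chk_user_ptr(ptr);	/* no-op at runtime; sparse checks 'ptr' is __user */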
-
Christophe Leroy authored
On the road to removing all PPC_INST_xx defines from asm/ppc-opcode.h, change PPC_INST_NOP to PPC_RAW_NOP(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ad46c195ca1b8572629ef07ba6bfe247585239a6.1621506159.git.christophe.leroy@csgroup.eu
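Call sites then look like this sketch (patch_instruction() and ppc_inst() as used throughout this series):

    /* before: patch_instruction(addr, ppc_inst(PPC_INST_NOP)); */
    patch_instruction(addr, ppc_inst(PPC_RAW_NOP()));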
-
Christophe Leroy authored
Start using PPC_RAW_xx() macros where relevant. PPC_INST_SYNC is used to represent both the 'sync' instruction and the family of synchronisation instructions. Keep it for the latter; maybe we'll change the name in the future to avoid confusion. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0945c155d6cb113431185fc1296ac127359fe29b.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use PPC_RAW_xxx() macros instead of open coding assembly opcodes. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix bad conversion in do_stf_exit_barrier_fixups()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e79cd8e111ca13bf8c61a384bac365aa7e207647.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
To increase readability, use _Rx macros instead of __REG_Rx. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/eb7ec6297b5d16f141c5866da3975b418e47431b.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use PPC_RAW_MFLR() instead of open coding with PPC_INST_MFLR. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c1887623e91e8b4da36e669e4c74de86320a5092.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Use PPC_RAW_MFLR() instead of open coding with PPC_INST_MFLR. Same for PPC_INST_NOP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/98fd4d717810b7c4032a1edf62dd6fe638e64329.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
On the road to removing all use of PPC_INST_xxx, replace PPC_INST_BLR with PPC_RAW_BLR(). Same for PPC_INST_NOP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c04f88d0e53d2122fbbe92226892a01ebc668b6a.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
To improve readability, use PPC_RAW_xx() macros instead of open coding. Those macros are self-explanatory so the comments can go as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/99d9ee8849d3992beeadb310a665aae01c3abfb1.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
To improve readability, use PPC_RAW_xx() macros instead of open coding. Those macros are self-explanatory so the comments can go as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4ca2bfdca2f47a293d05f61eb3c4e487ee170f1f.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Instead of open coding with PPC_INST_ defines, use PPC_RAW_() macros. It improves readability. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8c92f1d9e825ee47c6f88fe43ad42d2a8cc2ab4a.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Today we have __REG_Rx macros. They are mainly meant for internal use by __PPC_RA() and friends, which allow uses like __PPC_RA(R12). When used with the PPC_RAW_xx() macros, the result is not very readable. Add shorter _Rx macros in order to improve readability when used with the PPC_RAW_xx() macros. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ec34d92b7c2f810622261acfeeed4b0a0f4d01bd.1621506159.git.christophe.leroy@csgroup.eu
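The aliases are trivial and the gain is purely visual, e.g. (sketch):

    #define __REG_R12	12
    #define _R12		__REG_R12	/* short alias for PPC_RAW_xx() users */

    /* before */ inst = PPC_RAW_MR(__REG_R12, __REG_R3);
    /* after  */ inst = PPC_RAW_MR(_R12, _R3);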
-
Christophe Leroy authored
At the time being, we have PPC_RAW_PLXVP() and PPC_RAW_PSTXVP(), which provide a 64-bit value that then gets split by open coding to format it into a 'struct ppc_inst' instruction. Instead, define PPC_RAW_xxx_P() and PPC_RAW_xxx_S() macros that can be used as-is. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5d146b31b943e7ad674894421db4feef54804b9b.1621506159.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
_switch() saves and restores ALTIVEC and SPE status. For ALTIVEC this is redundant with what __switch_to() does with save_sprs() and restore_sprs() and giveup_all() before calling _switch(). Add support for SPE in save_sprs() and restore_sprs() and remove that handling from _switch(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8ab21fd93d6e0047aa71e6509e5e312f14b2991b.1620998075.git.christophe.leroy@csgroup.eu
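A sketch of the save side, assuming the usual SPEFSCR SPR and CPU feature check (restore_sprs() mirrors it):

    static inline void save_sprs(struct thread_struct *t)
    {
    #ifdef CONFIG_SPE
    	if (cpu_has_feature(CPU_FTR_SPE))
    		t->spefscr = mfspr(SPRN_SPEFSCR);
    #endif
    	/* ... existing ALTIVEC/VRSAVE handling ... */
    }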
-
Christophe Leroy authored
Commit 328e7e48 ("powerpc: force inlining of csum_partial() to avoid multiple csum_partial() with GCC10") inlined csum_partial(). Now that csum_partial() is inlined, GCC outlines csum_add() when called by csum_partial().

c064fb28 <csum_add>:
c064fb28:	7c 63 20 14	addc    r3,r3,r4
c064fb2c:	7c 63 01 94	addze   r3,r3
c064fb30:	4e 80 00 20	blr

c0665fb8 <csum_add>:
c0665fb8:	7c 63 20 14	addc    r3,r3,r4
c0665fbc:	7c 63 01 94	addze   r3,r3
c0665fc0:	4e 80 00 20	blr

c066719c:	7c 9a c0 2e	lwzx    r4,r26,r24
c06671a0:	38 60 00 00	li      r3,0
c06671a4:	7f 1a c2 14	add     r24,r26,r24
c06671a8:	4b ff ee 11	bl      c0665fb8 <csum_add>
c06671ac:	80 98 00 04	lwz     r4,4(r24)
c06671b0:	4b ff ee 09	bl      c0665fb8 <csum_add>
c06671b4:	80 98 00 08	lwz     r4,8(r24)
c06671b8:	4b ff ee 01	bl      c0665fb8 <csum_add>
c06671bc:	a0 98 00 0c	lhz     r4,12(r24)
c06671c0:	4b ff ed f9	bl      c0665fb8 <csum_add>
c06671c4:	7c 63 18 f8	not     r3,r3
c06671c8:	81 3f 00 68	lwz     r9,104(r31)
c06671cc:	81 5f 00 a0	lwz     r10,160(r31)
c06671d0:	7d 29 18 14	addc    r9,r9,r3
c06671d4:	7d 29 01 94	addze   r9,r9
c06671d8:	91 3f 00 68	stw     r9,104(r31)
c06671dc:	7d 1a 50 50	subf    r8,r26,r10
c06671e0:	83 01 00 10	lwz     r24,16(r1)
c06671e4:	83 41 00 18	lwz     r26,24(r1)

The sum with 0 is useless and should have been skipped. And there is even one completely unused instance of csum_add().

In file included from ./include/net/checksum.h:22,
                 from ./include/linux/skbuff.h:28,
                 from ./include/linux/icmp.h:16,
                 from net/ipv6/ip6_tunnel.c:23:
./arch/powerpc/include/asm/checksum.h: In function '__ip6_tnl_rcv':
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:172:31: note: called from here
  172 |         sum = csum_add(sum, (__force __wsum)*(const u32 *)buff);
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:177:31: note: called from here
  177 |         sum = csum_add(sum, (__force __wsum)
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  178 |                 *(const u32 *)(buff + 4));
      |                 ~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:183:31: note: called from here
  183 |         sum = csum_add(sum, (__force __wsum)
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  184 |                 *(const u32 *)(buff + 8));
      |                 ~~~~~~~~~~~~~~~~~~~~~~~~~
./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
   94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
      |                      ^~~~~~~~
./arch/powerpc/include/asm/checksum.h:186:31: note: called from here
  186 |         sum = csum_add(sum, (__force __wsum)
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  187 |                 *(const u16 *)(buff + 12));
      |                 ~~~~~~~~~~~~~~~~~~~~~~~~~~

Force inlining of csum_add().

 94c:	80 df 00 a0	lwz     r6,160(r31)
 950:	7d 28 50 2e	lwzx    r9,r8,r10
 954:	7d 48 52 14	add     r10,r8,r10
 958:	80 aa 00 04	lwz     r5,4(r10)
 95c:	80 ff 00 68	lwz     r7,104(r31)
 960:	7d 29 28 14	addc    r9,r9,r5
 964:	7d 29 01 94	addze   r9,r9
 968:	7d 08 30 50	subf    r8,r8,r6
 96c:	80 aa 00 08	lwz     r5,8(r10)
 970:	a1 4a 00 0c	lhz     r10,12(r10)
 974:	7d 29 28 14	addc    r9,r9,r5
 978:	7d 29 01 94	addze   r9,r9
 97c:	7d 29 50 14	addc    r9,r9,r10
 980:	7d 29 01 94	addze   r9,r9
 984:	7d 29 48 f8	not     r9,r9
 988:	7c e7 48 14	addc    r7,r7,r9
 98c:	7c e7 01 94	addze   r7,r7
 990:	90 ff 00 68	stw     r7,104(r31)

In the non-inlined version, the first sum with 0 was performed. Here it is skipped. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f7f4d4e364de6e473da874468b903da6e5d97adc.1620713272.git.christophe.leroy@csgroup.eu
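The fix itself is essentially one attribute; a sketch of the 32-bit flavour, whose asm matches the addc/addze pairs visible in the dumps above:

    static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
    {
    	/* fold the carry back in: addc then addze */
    	asm("addc %0,%0,%1;"
    	    "addze %0,%0;"
    	    : "+r" (csum) : "r" (addend));
    	return csum;
    }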
-
Michael Ellerman authored
Merge our fixes branch which has a number of important fixes, notably the fix for initrd corruption, as well as the fixes for scv vs ptrace.
-
Finn Thain authored
This avoids an (optional) compiler warning:

arch/powerpc/kernel/tau_6xx.c: In function 'TAU_init':
arch/powerpc/kernel/tau_6xx.c:204:30: error: too many arguments for format [-Werror=format-extra-args]
   tau_workq = alloc_workqueue("tau", WQ_UNBOUND, 1, 0);

Fixes: b1c6a0a1 ("powerpc/tau: Convert from timer to workqueue") Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Finn Thain <fthain@linux-m68k.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a1456e8bbd33ef702e3ff6f14b1bf3919241c62b.1623398307.git.fthain@linux-m68k.org
-
Michael Ellerman authored
Commit b0b3b2c7 ("powerpc: Switch to relative jump labels") switched us to using relative jump labels. That involves changing the code, target and key members in struct jump_entry to be relative to the address of the jump_entry, rather than absolute addresses. We have two static inlines that create a struct jump_entry, arch_static_branch() and arch_static_branch_jump(), as well as an asm macro ARCH_STATIC_BRANCH, which is used by the pseries-only hypervisor tracing code. Unfortunately we missed updating the key to be a relative reference in ARCH_STATIC_BRANCH. That causes a pseries kernel to have a handful of jump_entry structs with bad key values. Instead of being a relative reference they instead hold the full address of the key. However the code doesn't expect that, it still adds the key value to the address of the jump_entry (see jump_entry_key()) expecting to get a pointer to a key somewhere in kernel data. The table of jump_entry structs sits in rodata, which comes after the kernel text. In a typical build this will be somewhere around 15MB. The address of the key will be somewhere in data, typically around 20MB. Adding the two values together gets us a pointer somewhere around 45MB. We then call static_key_set_entries() with that bad pointer and modify some members of the struct static_key we think we are pointing at. A pseries kernel is typically ~30MB in size, so writing to ~45MB won't corrupt the kernel itself. However if we're booting with an initrd, depending on the size and exact location of the initrd, we can corrupt the initrd. Depending on how exactly we corrupt the initrd it can either cause the system to not boot, or just corrupt one of the files in the initrd. The fix is simply to make the key value relative to the jump_entry struct in the ARCH_STATIC_BRANCH macro. Fixes: b0b3b2c7 ("powerpc: Switch to relative jump labels") Reported-by: Anastasia Kovaleva <a.kovaleva@yadro.com> Reported-by: Roman Bolshakov <r.bolshakov@yadro.com> Reported-by: Greg Kurz <groug@kaod.org> Reported-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Daniel Axtens <dja@axtens.net> Tested-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210614131440.312360-1-mpe@ellerman.id.au
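For reference, a sketch of the generic reader's expectation (modelled on the relative jump_entry_key() in include/linux/jump_label.h): the stored key must be an offset, because it is added back to the entry's own address:

    static inline struct static_key *jump_entry_key(const struct jump_entry *entry)
    {
    	long offset = entry->key & ~3L;	/* low bits carry flags */

    	/* Only correct when the table was emitted with "KEY - ." (relative);
    	 * an absolute KEY makes this point ~45MB past the kernel, as
    	 * described above. */
    	return (struct static_key *)((unsigned long)&entry->key + offset);
    }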
-
Christophe Leroy authored
arch/powerpc/Kbuild descends into arch/powerpc/perf/ only when CONFIG_PERF_EVENTS is selected, so there is no need to take CONFIG_PERF_EVENTS into account in arch/powerpc/perf/Makefile. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Michal Suchánek <msuchanek@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d37f61afca55b5b33787b643890e061ae1c18f5f.1620396045.git.christophe.leroy@csgroup.eu
-
Andy Shevchenko authored
If for some reason any of the headers ends up including ctype.h, we will have a name collision. Avoid this by moving isspace() to a dedicated namespace. The code first appeared in commit cf68787b ("powerpc/prom_init: Evaluate mem kernel parameter for early allocation"). Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> [mpe: Reformat prom_isxdigit() now that we allow longer lines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210510144925.58195-1-andriy.shevchenko@linux.intel.com
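A sketch of the namespaced helper (character set per the usual isspace(); the exact prom_init.c spelling may differ):

    /* prom_init keeps local ctype helpers to avoid clashing with <linux/ctype.h> */
    static inline int prom_isspace(char c)
    {
    	return c == ' ' || c == '\t' || c == '\n' || c == '\r';
    }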
-
Shaokun Zhang authored
The functions 'event_ebb_init' and 'event_leader_ebb_init' are declared twice in the header file, so remove the repeated declarations. Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1622529385-5938-1-git-send-email-zhangshaokun@hisilicon.com
-
Baokun Li authored
Fixes gcc '-Wunused-but-set-variable' warning:

arch/powerpc/platforms/cell/spider-pci.c: In function 'spiderpci_io_flush':
arch/powerpc/platforms/cell/spider-pci.c:28:6: warning: variable 'val' set but not used [-Wunused-but-set-variable]

The variable has never been used since its introduction. Signed-off-by: Baokun Li <libaokun1@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210601085319.140461-1-libaokun1@huawei.com
-