- 26 May, 2020 5 commits
-
Christophe Leroy authored
The 8xx is about to map kernel linear space and IMMR using huge pages. In order to display those pages properly, ptdump needs to handle hugepd tables. For the time being, do it only at PGD level; further patches may add handling of hugepd tables at lower levels for other platforms when needed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/630728289158dcfeb06b14d40ed7c4c4e7148cf1.1589866984.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In order to properly display information regardless of the page size, it is necessary to take the real page size into account. Fixes: cabe8138 ("powerpc: dump as a single line areas mapping a single physical page.") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a53b2a0ffd042a8d85464bf90d55bc5b970e00a1.1589866984.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Display BAT flags the same way as page flags: rwx and wimg Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a07585f353c167b8db9597d83f992a5cb4fbf4c4.1589866984.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Display the size of areas mapped with BATs. For that, the size display for pages is refactored. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/acf764eee231f0358e66ca9e819f052804055acc.1589866984.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
For platforms using shared.c (4xx, Book3e, Book3s/32), also handle the _PAGE_COHERENT flag which corresponds to the M bit of the WIMG flags. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Make it more verbose, use "coherent" rather than "m"] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/324c3d860717e8e91fca3bb6c0f8b23e1644a404.1589866984.git.christophe.leroy@csgroup.eu
-
- 20 May, 2020 6 commits
-
Christophe Leroy authored
In order to allow sub-arches to allocate KASAN regions using optimised methods (huge pages on 8xx, BATs on book3s, ...), declare kasan_init_region() weak. Also make kasan_init_shadow_page_tables() accessible from outside, so that it can be called from the specific kasan_init_region() functions if needed. And populate the remaining KASAN address space only once the region mapping is done, to allow the 8xx to allocate hugepd instead of standard page tables for mapping via 8M hugepages. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3c1ce419fa1b5a4171b92d7fb16455ca17e1b96d.1589866984.git.christophe.leroy@csgroup.eu
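A minimal sketch of the shape of this change; the body below is a simplification (the real generic version also allocates backing memory for the region), and the weak linkage of kasan_init_region() is the point:

    /* generic fallback, overridable by sub-arches because it is weak */
    int __init __weak kasan_init_region(void *start, size_t size)
    {
            unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start);
            unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size);

            /* map the shadow of [start, start + size) with standard pages */
            return kasan_init_shadow_page_tables(k_start, k_end);
    }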
-
Christophe Leroy authored
kasan_remap_early_shadow_ro() and kasan_unmap_early_shadow_vmalloc() both update the early shadow mapping: the first one sets the mapping read-only while the other clears it. Refactor them and create kasan_update_early_region(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8c496c0828de2608c7c940c45525d177e91b6f1b.1589866984.git.christophe.leroy@csgroup.eu
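A hedged sketch of the refactor; update_shadow_pte() is a hypothetical helper standing in for the actual PTE walk:

    static void __init kasan_update_early_region(unsigned long k_start,
                                                 unsigned long k_end, pte_t pte)
    {
            unsigned long k_cur;

            for (k_cur = k_start; k_cur != k_end; k_cur += PAGE_SIZE)
                    update_shadow_pte(k_cur, pte);  /* hypothetical helper */

            flush_tlb_kernel_range(k_start, k_end);
    }

Both former functions then become one-line callers, passing either a read-only PTE or a cleared one.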
-
Christophe Leroy authored
Commit 45ff3c55 ("powerpc/kasan: Fix parallel loading of modules.") added spinlocks to manage parallel module loading. Since then, commit 47febbee ("powerpc/32: Force KASAN_VMALLOC for modules") converted module loading to KASAN_VMALLOC. The spinlocking has thus become unneeded and can be removed, simplifying kasan_init_shadow_page_tables(). Also remove the inclusion of linux/moduleloader.h and linux/vmalloc.h, which are no longer needed since the removal of module management. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/81a4d3aee8b82bc1355595935c8f4ad9d3b22a83.1589866984.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Doing KASAN page allocation in MMU_init() is too early: the kernel doesn't yet have access to the entire memory space, and memblock_alloc() fails when the kernel is a bit big. Do it from kasan_init() instead. Fixes: 2edb16ef ("powerpc/32: Add KASAN support") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c24163ee5d5f8cdf52fefa45055ceb35435b8f15.1589866984.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
At the time being, KASAN_SHADOW_END is 0x100000000, which is 0 in 32-bit representation. This leads to a couple of issues:
- kasan_remap_early_shadow_ro() does nothing because the comparison k_cur < k_end is always false.
- In ptdump, the address comparison for markers display fails and the marker's name is printed at the start of the KASAN area instead of at the end.
However, there is no need to shadow the KASAN shadow area itself, so the KASAN shadow area can stop shadowing memory at the start of itself. With PAGE_OFFSET set to 0xc0000000, the KASAN shadow area then goes from 0xf8000000 to 0xff000000. Fixes: cbd18991 ("powerpc/mm: Fix an Oops in kasan_mmu_init()") Cc: stable@vger.kernel.org Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ae1a3c0d19a37410c209c3fc453634cfcc0ee318.1589866984.git.christophe.leroy@csgroup.eu
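The truncation is easy to reproduce in plain C; this standalone snippet (an illustration only, not kernel code) shows why the loop guard never fires on a 32-bit build:

    #include <stdio.h>

    int main(void)
    {
            /* on 32-bit, unsigned long is 32 bits: 0x100000000 truncates to 0 */
            unsigned int k_end = (unsigned int)0x100000000ULL;
            unsigned int k_cur = 0xf8000000u;

            printf("k_end = 0x%x\n", k_end);              /* prints 0 */
            printf("k_cur < k_end: %d\n", k_cur < k_end); /* prints 0: loop never runs */
            return 0;
    }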
-
Christophe Leroy authored
In case (k_start & PAGE_MASK) doesn't equal k_start, 'va' will never be NULL although 'block' is NULL. Check the return of memblock_alloc() directly instead of the resulting address in the loop. Fixes: 509cd3f2 ("powerpc/32: Simplify KASAN init") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7cb8ca82042bfc45a5cfe726c921cd7e7eeb12a3.1589866984.git.christophe.leroy@csgroup.eu
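A sketch of the fixed pattern (the mapping call is simplified; treat the details as assumptions):

    void *block = memblock_alloc(k_end - k_start, PAGE_SIZE);

    if (!block)     /* test the allocation itself... */
            return -ENOMEM;

    for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
            /* ...not 'va': when k_start is not page-aligned, va is
             * non-NULL here even if block is NULL */
            void *va = block + (k_cur - k_start);

            map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
    }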
-
- 18 May, 2020 29 commits
-
Ravi Bangoria authored
Add support for 2nd DAWR in xmon. With this, we can have two simultaneous breakpoints from xmon. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-17-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
Xmon allows overwriting breakpoints because only one DAWR is supported. But with multiple DAWRs, overwriting becomes ambiguous or unnecessarily complicated, so don't allow it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-16-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
With Book3s DAWR, ptrace and perf watchpoints on powerpc behave differently. A ptrace watchpoint works in one-shot mode and generates a signal before the instruction executes; it's the ptrace user's job to single-step the instruction and re-enable the watchpoint. A perf watchpoint, on the other hand, is emulated/single-stepped by the kernel, which then generates the event. If perf and ptrace create two events with the same or overlapping address ranges, it's ambiguous who should single-step the instruction. Because of this issue, don't allow perf and ptrace watchpoints at the same time if their address ranges overlap. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-15-ravi.bangoria@linux.ibm.com
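The range check underlying this policy is the classic closed-interval overlap test; a minimal sketch:

    /* inclusive ranges [s1, e1] and [s2, e2] overlap iff each starts
     * no later than the other ends */
    static bool ranges_overlap(unsigned long s1, unsigned long e1,
                               unsigned long s2, unsigned long e2)
    {
            return s1 <= e2 && s2 <= e1;
    }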
-
Ravi Bangoria authored
Currently we assume that we have only one watchpoint supported by hw. Get rid of that assumption and use a dynamic loop instead. This should make supporting more watchpoints very easy. With more than one watchpoint, the exception handler needs to know which DAWR caused the exception, and the hardware currently does not provide that, so software logic is needed. To figure out which DAWR caused the exception, check all combinations of the user-specified range, the DAWR address range, the actual access range and the DAWRX constraints. For example, if the user-specified range and the actual access range overlap but DAWRX is configured for a read-only watchpoint and the instruction is a store, this DAWR cannot have caused the exception. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Reviewed-by: Michael Neuling <mikey@neuling.org> [mpe: Unsplit multi-line printk() strings, fix some sparse warnings] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200514111741.97993-14-ravi.bangoria@linux.ibm.com
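An illustrative sketch of that software filter; field and flag names are assumptions, not the exact kernel code:

    static bool dawr_could_match(struct arch_hw_breakpoint *brk,
                                 unsigned long ea_start, unsigned long ea_end,
                                 bool is_store)
    {
            /* access range does not overlap the watched range: not this DAWR */
            if (ea_end < brk->address || brk->address + brk->len - 1 < ea_start)
                    return false;

            /* a store cannot trigger a read-only watchpoint, and vice versa */
            if (is_store)
                    return brk->type & HW_BRK_TYPE_WRITE;
            return brk->type & HW_BRK_TYPE_READ;
    }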
-
Ravi Bangoria authored
Currently we calculate the hw-aligned start and end addresses manually. Replace that with the builtin ALIGN_DOWN() and ALIGN() macros. So far end_addr was inclusive; this patch makes it exclusive (by avoiding the -1) for better readability. Suggested-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-13-ravi.bangoria@linux.ibm.com
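Roughly, the before/after looks like this (variable names assumed):

    /* before: manual rounding, inclusive end */
    hw_start_addr = addr & ~(HW_BREAKPOINT_SIZE - 1);
    hw_end_addr = ((addr + len + HW_BREAKPOINT_SIZE - 1) & ~(HW_BREAKPOINT_SIZE - 1)) - 1;

    /* after: generic macros, exclusive end (no trailing -1) */
    hw_start_addr = ALIGN_DOWN(addr, HW_BREAKPOINT_SIZE);
    hw_end_addr = ALIGN(addr + len, HW_BREAKPOINT_SIZE);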
-
Ravi Bangoria authored
Introduce an is_ptrace_bp() function and move the check inside it. It will be used more in a later set of patches. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-12-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
ptrace_bps is already an array of size HBP_NUM_MAX, but we use a hardcoded index 0 while fetching/updating it. Convert such code to loop over the array. The ptrace interface for using multiple watchpoints remains the same, e.g. two PPC_PTRACE_SETHWDEBUG calls will create two watchpoints if the underlying hardware supports it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-11-ravi.bangoria@linux.ibm.com
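For example, finding a free slot becomes a loop rather than a test of index 0; a hedged sketch (the helper name is hypothetical):

    static int find_empty_ptrace_bp(struct thread_struct *thread)
    {
            int i;

            for (i = 0; i < nr_wp_slots(); i++) {
                    if (!thread->ptrace_bps[i])
                            return i;
            }
            return -1;
    }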
-
Ravi Bangoria authored
So far powerpc hardware has supported only one watchpoint, but Power10 is introducing a second DAWR. Convert thread_struct->hw_brk into an array. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-10-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
Instead of disabling only the first watchpoint, disable all available watchpoints while clearing dawr_force_enable. The callback function is used only for disabling watchpoints, so rename it to disable_dawrs_cb(). And the null_brk parameter is not really required while disabling a watchpoint, so remove it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-9-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
Instead of disabling only one watchpoint, get the number of available watchpoints dynamically and disable all of them. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-8-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
Introduce a new parameter 'nr' to __set_breakpoint() which indicates which DAWR should be programmed. Also convert the current_brk variable to an array. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-7-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
Introduce a new parameter 'nr' to set_dawr() which indicates which DAWR should be programmed. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-6-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
Userspace can query the number of available watchpoints (dbginfo.num_data_bps) using ptrace(PPC_PTRACE_GETHWDBGINFO). Return the actual number of watchpoints available on the machine rather than a hardcoded 1. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-5-ravi.bangoria@linux.ibm.com
-
Ravi Bangoria authored
So far we have had only one watchpoint, so HBP_NUM was hardcoded to 1. But Power10 is introducing a second DAWR, and the kernel should thus be able to dynamically find the actual number of watchpoints supported by the hardware it's running on. Introduce a function for that. Also convert the HBP_NUM macro to HBP_NUM_MAX, which now represents the maximum number of watchpoints supported by powerpc. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-4-ravi.bangoria@linux.ibm.com
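In outline, something like the following; the constant's initial value is an assumption of this sketch, and it grows once the second DAWR is wired up:

    #define HBP_NUM_MAX     1       /* maximum watchpoints powerpc supports */

    static inline int nr_wp_slots(void)
    {
            /* a single query point; can later be keyed off a Power10 feature */
            return HBP_NUM_MAX;
    }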
-
Ravi Bangoria authored
Power10 is introducing a second DAWR. Add SPRN_ macros for it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-3-ravi.bangoria@linux.ibm.com
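Concretely, something like the following, with register numbers taken from ISA v3.1 (treat them as this sketch's assumption):

    #define SPRN_DAWR1      0xB5    /* second Data Address Watchpoint Register */
    #define SPRN_DAWRX1     0xBD    /* ...and its extension register */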
-
Ravi Bangoria authored
Power10 is introducing a second DAWR. Use the real register names from the ISA for the current macros: s/SPRN_DAWR/SPRN_DAWR0/ s/SPRN_DAWRX/SPRN_DAWRX0/ Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Michael Neuling <mikey@neuling.org> Link: https://lore.kernel.org/r/20200514111741.97993-2-ravi.bangoria@linux.ibm.com
-
Jordan Niethe authored
This adds emulation support for the following prefixed Fixed-Point Arithmetic instructions: * Prefixed Add Immediate (paddi) Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Balamuruhan S <bala24@linux.ibm.com> [mpe: Squash in get_op() usage] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-31-jniethe5@gmail.com
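paddi carries a 34-bit immediate split across the two instruction words; a hedged sketch of the decode (assumes a 64-bit long, as on ppc64 — not the kernel's exact code):

    /* the prefix supplies the upper 18 bits (si0), the suffix the lower 16 (si1) */
    static long paddi_imm(unsigned int prefix, unsigned int suffix)
    {
            unsigned long si0 = prefix & 0x3ffff;
            unsigned long si1 = suffix & 0xffff;
            unsigned long imm = (si0 << 16) | si1;

            /* sign-extend from bit 33 */
            return (long)(imm << 30) >> 30;
    }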
-
Jordan Niethe authored
This adds emulation support for the following prefixed integer load/stores:
* Prefixed Load Byte and Zero (plbz)
* Prefixed Load Halfword and Zero (plhz)
* Prefixed Load Halfword Algebraic (plha)
* Prefixed Load Word and Zero (plwz)
* Prefixed Load Word Algebraic (plwa)
* Prefixed Load Doubleword (pld)
* Prefixed Store Byte (pstb)
* Prefixed Store Halfword (psth)
* Prefixed Store Word (pstw)
* Prefixed Store Doubleword (pstd)
* Prefixed Load Quadword (plq)
* Prefixed Store Quadword (pstq)
the following prefixed floating-point load/stores:
* Prefixed Load Floating-Point Single (plfs)
* Prefixed Load Floating-Point Double (plfd)
* Prefixed Store Floating-Point Single (pstfs)
* Prefixed Store Floating-Point Double (pstfd)
and the following prefixed VSX load/stores:
* Prefixed Load VSX Scalar Doubleword (plxsd)
* Prefixed Load VSX Scalar Single-Precision (plxssp)
* Prefixed Load VSX Vector [0|1] (plxv, plxv0, plxv1)
* Prefixed Store VSX Scalar Doubleword (pstxsd)
* Prefixed Store VSX Scalar Single-Precision (pstxssp)
* Prefixed Store VSX Vector [0|1] (pstxv, pstxv0, pstxv1)
Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Balamuruhan S <bala24@linux.ibm.com> [mpe: Use CONFIG_PPC64 not __powerpc64__, use get_op()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-30-jniethe5@gmail.com
-
Jordan Niethe authored
If a prefixed instruction results in an alignment exception, the SRR1_PREFIXED bit is set. The handler attempts to emulate the responsible instruction and then increment the NIP past it. Use SRR1_PREFIXED to determine by how much the NIP should be incremented. Prefixed instructions are not permitted to cross 64-byte boundaries; if they do, the alignment interrupt is invoked with the SRR1 BOUNDARY bit set. If this occurs, send a SIGBUS to the offending process if in user mode; if in kernel mode, call bad_page_fault(). Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-29-jniethe5@gmail.com
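A sketch of the NIP handling (the idea, not the exact handler code):

    static void advance_past_emulated_insn(struct pt_regs *regs)
    {
            /* SRR1 (saved in regs->msr on interrupt entry) says whether the
             * faulting instruction was prefixed: step 8 bytes, not 4 */
            regs->nip += (regs->msr & SRR1_PREFIXED) ? 8 : 4;
    }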
-
Jordan Niethe authored
Do not allow inserting breakpoints on the suffix of a prefixed instruction in kprobes. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-28-jniethe5@gmail.com
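The essence of the check, as an illustrative sketch (simplified; the kernel goes through its ppc_inst accessors rather than raw words):

    /* 'addr' points at a suffix if the preceding word is a prefix,
     * i.e. has primary opcode 1 (ISA v3.1) */
    static bool addr_is_suffix(const u32 *addr)
    {
            return (*(addr - 1) >> 26) == 1;
    }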
-
Jordan Niethe authored
Do not allow placing xmon breakpoints on the suffix of a prefixed instruction. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Don't split printf strings across lines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-27-jniethe5@gmail.com
-
Jordan Niethe authored
Expand the feature-fixups self-tests to include tests for prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Use CONFIG_PPC64 not __powerpc64__, add empty inlines] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-26-jniethe5@gmail.com
-
Jordan Niethe authored
Expand the code-patching self-tests to include tests for patching prefixed instructions. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> [mpe: Use CONFIG_PPC64 not __powerpc64__] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-25-jniethe5@gmail.com
-
Jordan Niethe authored
For powerpc64, redefine the ppc_inst type so both word and prefixed instructions can be represented. On powerpc32 the type will remain the same. Update places which had assumed instructions to be 4 bytes long. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Alistair Popple <alistair@popple.id.au> [mpe: Rework the get_user_inst() macros to be parameterised, and don't assign to the dest if an error occurred. Use CONFIG_PPC64 not __powerpc64__ in a few places. Address other comments from Christophe. Fix some sparse complaints.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-24-jniethe5@gmail.com
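The reworked type looks roughly like this (simplified from the patch):

    struct ppc_inst {
            u32 val;
    #ifdef CONFIG_PPC64
            u32 suffix;     /* set to a sentinel for non-prefixed instructions */
    #endif
    } __packed;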
-
Jordan Niethe authored
Currently patch_imm32_load_insns() is used to load an instruction into r4 to be emulated by emulate_step(). For prefixed instructions we would like to be able to load a 64-bit immediate into r4. To prepare for this, make patch_imm64_load_insns() take an argument that decides which register to load an immediate into, rather than hardcoding r3. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200516115449.4168796-1-mpe@ellerman.id.au
-
Jordan Niethe authored
Add the BOUNDARY SRR1 bit definition for when the cause of an alignment exception is a prefixed instruction that crosses a 64-byte boundary. Add the PREFIXED SRR1 bit definition for exceptions caused by prefixed instructions. Bit 35 of SRR1 is called SRR1_ISI_N_OR_G. This name comes from it being used to indicate that an ISI was due to the access being no-exec or guarded. ISA v3.1 adds another purpose: it is also set if there is an access to a cache-inhibited location by a prefixed instruction. Rename it from SRR1_ISI_N_OR_G to SRR1_ISI_N_G_OR_CIP. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-23-jniethe5@gmail.com
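In macro form, something like the following (bit values assumed from the description, not verified against the patch):

    #define SRR1_PREFIXED   0x20000000 /* exception caused by a prefixed instruction */
    #define SRR1_BOUNDARY   0x10000000 /* prefixed instruction crosses 64-byte boundary */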
-
Alistair Popple authored
Prefixed instructions have their own FSCR bit, which needs to be enabled via a CPU feature. The kernel will save the FSCR for problem state, but it needs to be enabled initially. If prefixed instructions are made unavailable by the [H]FSCR, attempting to use them will cause a facility unavailable exception. Add "PREFIX" to the facility_strings[]. Currently there are no prefixed instructions that are actually emulated by emulate_instruction() within facility_unavailable_exception(). However, when the exception is caused by a prefixed instruction, the SRR1 PREFIXED bit is set. Prepare for dealing with emulated prefixed instructions by checking for this bit. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Link: https://lore.kernel.org/r/20200506034050.24806-22-jniethe5@gmail.com
-
Jordan Niethe authored
test_translate_branch() uses two pointers to instructions within a buffer, p and q, to test patch_branch(). The pointer arithmetic done on them assumes a size of 4. This will not work if the instruction length changes. Instead, do the arithmetic relative to the void * of the buffer. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-21-jniethe5@gmail.com
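The difference, sketched (buffer setup elided; uses the kernel's GNU void * arithmetic):

    void *buf;                      /* start of the test buffer */
    struct ppc_inst *p = buf;
    struct ppc_inst *q;

    /* before: q = p + 1; advances sizeof(struct ppc_inst) bytes,
     * which is wrong once the type grows past 4 bytes */
    q = buf + 4;                    /* after: an explicit 4-byte offset */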
-
Jordan Niethe authored
When a new breakpoint is created, the second instruction of that breakpoint is patched with a trap instruction. This assumes the length of the instruction is always the same. In preparation for prefixed instructions, remove this assumption. Insert the trap instruction at the same time the first instruction is inserted. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Alistair Popple <alistair@popple.id.au> Link: https://lore.kernel.org/r/20200506034050.24806-20-jniethe5@gmail.com
-