1. 21 Feb, 2019 9 commits
    • powerpc/64s/hash: Fix assert_slb_presence() use of the slbfee. instruction · 7104dccf
      Nicholas Piggin authored
      The slbfee. instruction must have bit 24 of RB clear; failure to do
      so can result in false negatives that lead to incorrect assertions.
      
      This is not obvious from the ISA v3.0B document, which only says:
      
          The hardware ignores the contents of RB 36:38 40:63 -- p.1032
      
      This patch fixes the bug and also clears all other bits from PPC
      bits 36-63, which is good practice when dealing with reserved or
      ignored bits.
      
      Fixes: e15a4fea ("powerpc/64s/hash: Add some SLB debugging tests")
      Cc: stable@vger.kernel.org # v4.20+
      Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/hash: Increase vmalloc space to 512T with hash MMU · 3d8810e0
      Michael Ellerman authored
      This patch updates the kernel non-linear virtual map to 512TB when
      we're built with 64K page size and are using the hash MMU. We allocate
      one context for the vmalloc region and hence the max virtual area size
      is limited by the context map size (512TB for 64K and 64TB for 4K page
      size).
      
      This patch fixes boot failures on systems with large amounts of RAM,
      where we need a large vmalloc space to handle per-cpu allocations.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    • powerpc/ptrace: Simplify vr_get/set() to avoid GCC warning · ca6d5149
      Michael Ellerman authored
      GCC 8 warns about the logic in vr_get/set(), which with -Werror breaks
      the build:
      
        In function ‘user_regset_copyin’,
            inlined from ‘vr_set’ at arch/powerpc/kernel/ptrace.c:628:9:
        include/linux/regset.h:295:4: error: ‘memcpy’ offset [-527, -529] is
        out of the bounds [0, 16] of object ‘vrsave’ with type ‘union
        <anonymous>’ [-Werror=array-bounds]
        arch/powerpc/kernel/ptrace.c: In function ‘vr_set’:
        arch/powerpc/kernel/ptrace.c:623:5: note: ‘vrsave’ declared here
           } vrsave;
      
      This has been identified as a regression in GCC, see GCC bug 88273.
      
      However we can avoid the warning and also simplify the logic and make
      it more robust.
      
      Currently we pass -1 as end_pos to user_regset_copyout(). This says
      "copy up to the end of the regset".
      
      The definition of the regset is:
      	[REGSET_VMX] = {
      		.core_note_type = NT_PPC_VMX, .n = 34,
      		.size = sizeof(vector128), .align = sizeof(vector128),
      		.active = vr_active, .get = vr_get, .set = vr_set
      	},
      
      The end is calculated as (n * size), ie. 34 * sizeof(vector128).
      
      In vr_get/set() we pass start_pos as 33 * sizeof(vector128), meaning
      we can copy up to sizeof(vector128) into/out-of vrsave.
      
      The on-stack vrsave is defined as:
        union {
      	  elf_vrreg_t reg;
      	  u32 word;
        } vrsave;
      
      And elf_vrreg_t is:
        typedef __vector128 elf_vrreg_t;
      
      So there is no bug, but we rely on all those sizes lining up,
      otherwise we would have a kernel stack exposure/overwrite on our
      hands.
      
      Rather than relying on that, we can pass an explicit end_pos based
      on sizeof(vrsave). The result should be exactly the same, but it is
      more obviously not over-reading/writing the stack and it avoids the
      compiler warning.
      Reported-by: Meelis Roos <mroos@linux.ee>
      Reported-by: Mathieu Malaterre <malat@debian.org>
      Cc: stable@vger.kernel.org
      Tested-by: Mathieu Malaterre <malat@debian.org>
      Tested-by: Meelis Roos <mroos@linux.ee>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/powernv/npu: Remove redundant change_pte() hook · 1b58a975
      Peter Xu authored
      The change_pte() notifier was designed to be used as a quick path to
      update secondary MMU PTEs on write permission changes or PFN
      changes. For KVM, it can reduce vm-exits when a vcpu faults on pages
      that were touched up by KSM. It is not meant for cache invalidation;
      note that the notifier is called before the real PTE update anyway
      (see set_pte_at_notify(), where set_pte_at() is called afterwards).
      
      All the necessary cache invalidation should already be done in
      invalidate_range().
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Alistair Popple <alistair@popple.id.au>
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Merge branch 'topic/ppc-kvm' into next · e121ee6b
      Michael Ellerman authored
      Merge commits we're sharing with the kvm-ppc tree.
    • powerpc/64s: Better printing of machine check info for guest MCEs · c0577201
      Paul Mackerras authored
      This adds an "in_guest" parameter to machine_check_print_event_info()
      so that we can avoid trying to translate guest NIP values into
      symbolic form using the host kernel's symbol table.
      Reviewed-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
      Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S HV: Simplify machine check handling · 884dfb72
      Paul Mackerras authored
      This makes the handling of machine check interrupts that occur inside
      a guest simpler and more robust, with less done in assembler code and
      in real mode.
      
      Now, when a machine check occurs inside a guest, we always get the
      machine check event struct and put a copy in the vcpu struct for the
      vcpu where the machine check occurred.  We no longer call
      machine_check_queue_event() from kvmppc_realmode_mc_power7(),
      because on POWER8, when a vcpu is running on an offline secondary
      thread, machine_check_queue_event() calls irq_work_queue(), which
      doesn't work because the CPU is offline; instead it triggers the
      WARN_ON(lazy_irq_pending()) in pnv_smp_cpu_kill_self(), which fires
      again and again because nothing clears the condition.
      
      All that machine_check_queue_event() actually does is to cause the
      event to be printed to the console.  For a machine check occurring in
      the guest, we now print the event in kvmppc_handle_exit_hv()
      instead.
      
      The assembly code at label machine_check_realmode now just calls C
      code and then continues exiting the guest.  We no longer either
      synthesize a machine check for the guest in assembly code or return
      to the guest without a machine check.
      
      The code in kvmppc_handle_exit_hv() is extended to handle the case
      where the guest is not FWNMI-capable.  In that case we now always
      synthesize a machine check interrupt for the guest.  Previously, if
      the host thinks it has recovered the machine check fully, it would
      return to the guest without any notification that the machine check
      had occurred.  If the machine check was caused by some action of the
      guest (such as creating duplicate SLB entries), it is much better to
      tell the guest that it has caused a problem.  Therefore we now always
      generate a machine check interrupt for guests that are not
      FWNMI-capable.
      Reviewed-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
      Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Merge branch 'topic/dma' into next · d0055df0
      Michael Ellerman authored
      Merge hch's big DMA rework series. This is in a topic branch in case he
      wants to merge it to minimise conflicts.
    • KVM: PPC: Book3S HV: Context switch AMR on Power9 · d976f680
      Michael Ellerman authored
      kvmhv_p9_guest_entry() implements a fast-path guest entry for Power9
      when guest and host are both running with the Radix MMU.
      
      Currently in that path we don't save the host AMR (Authority Mask
      Register) value, and we always restore 0 on return to the host. That
      is OK at the moment because the AMR is not used for storage keys with
      the Radix MMU.
      
      However we plan to start using the AMR on Radix to prevent the kernel
      from reading/writing to userspace outside of copy_to/from_user(). In
      order to make that work we need to save/restore the AMR value.
      
      We only restore the value if it is different from the guest value,
      which is already in the register when we exit to the host. This should
      mean we rarely need to actually restore the value when running a
      modern Linux as a guest, because it will be using the same value as
      us.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Tested-by: Russell Currey <ruscur@russell.cc>
  2. 19 Feb, 2019 1 commit
  3. 18 Feb, 2019 30 commits