17 Oct, 2013 · 40 commits
    • kvm: powerpc: book3s: Support building HV and PR KVM as module · 2ba9f0d8
      Aneesh Kumar K.V authored
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      [agraf: squash in compile fix]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: pr: move PR related tracepoints to a separate header · 72c12535
      Aneesh Kumar K.V authored
      This patch moves the PR-related tracepoints to a separate header. This
      makes it possible to build PR KVM as a kernel module, which is done in
      later patches.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: Add is_hv_enabled to kvmppc_ops · 699cc876
      Aneesh Kumar K.V authored
      This helps us identify whether we are running with hypervisor-mode KVM
      enabled. The change is needed so that we can have both HV and PR kvm
      enabled in the same kernel.
      
      If both HV and PR KVM are included, interrupts come in to the HV version
      of the kvmppc_interrupt code, which then jumps to the PR handler,
      renamed to kvmppc_interrupt_pr, if the guest is a PR guest.
      
      Allowing both PR and HV in the same kernel required some changes to
      kvm_dev_ioctl_check_extension(), since the values returned now can't
      be selected with #ifdefs as much as previously. We look at is_hv_enabled
      to return the right value when checking for capabilities. For capabilities that
      are only provided by HV KVM, we return the HV value only if
      is_hv_enabled is true. For capabilities provided by PR KVM but not HV,
      we return the PR value only if is_hv_enabled is false.
      
      NOTE: in a later patch we replace is_hv_enabled with a static inline
      function comparing kvm_ppc_ops.
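
      As a rough sketch of the resulting shape (the capability names here are
      illustrative examples, not a list taken from this patch):

          switch (ext) {
          case KVM_CAP_PPC_SMT:           /* provided by HV KVM only */
                  r = kvmppc_ops->is_hv_enabled ? threads_per_core : 0;
                  break;
          case KVM_CAP_PPC_OSI:           /* provided by PR KVM but not HV */
                  r = !kvmppc_ops->is_hv_enabled;
                  break;
          }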
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: Cleanup interrupt handling code · dd96b2c2
      Aneesh Kumar K.V authored
      With this patch, if HV is included, interrupts come in to the HV version
      of the kvmppc_interrupt code, which then jumps to the PR handler,
      renamed to kvmppc_interrupt_pr, if the guest is a PR guest. This helps
      in enabling both HV and PR, which we do in a later patch.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: Add kvmppc_ops callback · 3a167bea
      Aneesh Kumar K.V authored
      This patch adds a new callback structure, kvmppc_ops. This will help us
      enable both HV and PR KVM together in the same kernel. The actual change
      to enable them together is done in a later patch in the series.
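
      The idea, sketched with a couple of illustrative callbacks (the real
      structure carries more members than shown here):

          struct kvmppc_ops {
                  bool is_hv_enabled;   /* added by the patch above */
                  struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm,
                                                  unsigned int id);
                  void (*vcpu_free)(struct kvm_vcpu *vcpu);
                  int  (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
                  /* ... further callbacks for MMU, interrupts, ioctls ... */
          };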
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      [agraf: squash in booke changes]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: Add a new config variable CONFIG_KVM_BOOK3S_HV_POSSIBLE · 9975f5e3
      Aneesh Kumar K.V authored
      This helps us select the relevant code in the kernel when we later move
      the HV and PR bits into separate modules. The patch also makes the
      config options for PR KVM selectable.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: pr: Rename KVM_BOOK3S_PR to KVM_BOOK3S_PR_POSSIBLE · 7aa79938
      Aneesh Kumar K.V authored
      With later patches supporting PR KVM as a kernel module, the changes
      that have to be built into the main kernel binary to enable the PR KVM
      module are now selected via KVM_BOOK3S_PR_POSSIBLE.
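
      A sketch of how the *_POSSIBLE split from this commit and the one above
      is meant to be used (the code inside the guards is illustrative):

          #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
                  /* built-in bits needed so PR KVM can later be a module */
          #endif
          #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
                  /* built-in bits needed so HV KVM can later be a module */
          #endif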
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: move book3s_64_vio_hv.c into the main kernel binary · 066212e0
      Paul Mackerras authored
      Since the code in book3s_64_vio_hv.c is called from real mode with HV
      KVM, and therefore has to be built into the main kernel binary, this
      makes it always built-in rather than part of the KVM module.  It gets
      called from the KVM module by PR KVM, so this adds an EXPORT_SYMBOL_GPL().
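
      For instance, for the TCE hcall handler that lives in this file (treat
      the exact prototype as a from-memory sketch):

          long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
                                unsigned long ioba, unsigned long tce);
          EXPORT_SYMBOL_GPL(kvmppc_h_put_tce); /* callable from PR KVM module */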
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s: remove kvmppc_handler_highmem label · 178db620
      Paul Mackerras authored
      This label is no longer used.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: E500: Add userspace debug stub support · ce11e48b
      Bharat Bhushan authored
      This patch adds debug stub support on booke/bookehv. The QEMU debug
      stub can now use hardware breakpoints, watchpoints and software
      breakpoints to debug the guest.
      
      This is how we save/restore debug register context when switching
      between guest, userspace and kernel user-process:
      
      When QEMU is running
       -> thread->debug_reg == QEMU debug register context.
       -> Kernel will handle switching the debug register on context switch.
       -> no vcpu_load() called
      
      QEMU makes ioctls (except RUN)
       -> This will call vcpu_load()
       -> should not change context.
       -> Some ioctls can change vcpu debug register, context saved in vcpu->debug_regs
      
      QEMU Makes RUN ioctl
       -> Save thread->debug_reg on STACK
       -> Set thread->debug_reg = vcpu->debug_reg
       -> load thread->debug_reg
       -> RUN VCPU ( So thread points to vcpu context )
      
      Context switch happens while VCPU is running
       -> the resulting vcpu_load() should not load any context
       -> the kernel loads the vcpu context, as thread->debug_regs points to
          the vcpu context.
      
      On heavyweight_exit
       -> Load the context saved on stack in thread->debug_reg
      
      Currently we do not support debug resource emulation for the guest.
      On a debug exception we always exit to user space, irrespective of
      whether user space is expecting the debug exception or not. If this is
      an unexpected exception (a breakpoint/watchpoint event not set by
      userspace), we leave the action to user space. This is similar to the
      behaviour before; the difference is that proper exit state is now
      available to user space.
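
      Condensed into code, the RUN-ioctl part of the flow looks roughly like
      this (a sketch following the steps above, not the literal patch):

          struct debug_reg debug = current->thread.debug; /* save on stack */

          current->thread.debug = vcpu->arch.dbg_reg;   /* thread now points
                                                           to vcpu context */
          switch_booke_debug_regs(&vcpu->arch.dbg_reg); /* load h/w regs */
          ret = __kvmppc_vcpu_run(kvm_run, vcpu);       /* RUN VCPU */
          switch_booke_debug_regs(&debug);              /* heavyweight exit: */
          current->thread.debug = debug;                /* restore from stack */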
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: E500: Using "struct debug_reg" · 547465ef
      Bharat Bhushan authored
      For KVM, also use the "struct debug_reg" defined in asm/processor.h.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: E500: exit to user space on "ehpriv 1" instruction · b12c7841
      Bharat Bhushan authored
      "ehpriv 1" instruction is used for setting software breakpoints
      by user space. This patch adds support to exit to user space
      with "run->debug" have relevant information.
      
      As this is the first point we are using run->debug, also defined
      the run->debug structure.
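
      A sketch of the exit path (field names as defined for powerpc's
      struct kvm_debug_exit_arch):

          run->exit_reason = KVM_EXIT_DEBUG;
          run->debug.arch.address = vcpu->arch.pc; /* address of "ehpriv 1" */
          run->debug.arch.status = 0;              /* software breakpoint */
          return RESUME_HOST;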
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • powerpc: export debug registers save function for KVM · fc82cf11
      Bharat Bhushan authored
      KVM needs this function when switching from the vcpu to a user-space
      thread. A subsequent patch will use this function.
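
      Presumably the export is of the debug-register switch helper, along
      these lines (an assumption, not quoted from the patch):

          EXPORT_SYMBOL_GPL(switch_booke_debug_regs);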
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Acked-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • powerpc: move debug registers in a structure · 95791988
      Bharat Bhushan authored
      This way we can use the same data structure with KVM, and it also helps
      in using other debug-related functions.
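
      The shape of the change, sketched (only a few of the debug registers
      are shown):

          struct debug_reg {
                  unsigned long dbcr0;  /* debug control registers */
                  unsigned long dbcr1;
                  unsigned long iac1;   /* instruction address compare */
                  unsigned long dac1;   /* data address compare */
                  /* ... remaining iac/dac/dvc registers ... */
          };

          struct thread_struct {
                  /* ... */
                  struct debug_reg debug;   /* was: individual fields */
                  /* ... */
          };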
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: e500: mark page accessed when mapping a guest page · 84e4d632
      Bharat Bhushan authored
      Mark the guest page as accessed so that there is less chance of the
      page getting swapped out.
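
      A sketch of the one-line idea, once the host pfn of the guest page is
      known:

          kvm_set_pfn_accessed(pfn);  /* tell the MM the page was referenced */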
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Acked-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: allow guest control "G" attribute in mas2 · ca8ccbd4
      Bharat Bhushan authored
      "G" bit in MAS2 indicates whether the page is Guarded.
      There is no reason to stop guest setting  "G", so allow him.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: allow guest control "E" attribute in mas2 · fd75cb51
      Bharat Bhushan authored
      "E" bit in MAS2 bit indicates whether the page is accessed
      in Little-Endian or Big-Endian byte order.
      There is no reason to stop guest setting  "E", so allow him."
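
      Together with the "G" commit above, the effect is to widen the mask of
      MAS2 attribute bits the guest may control; roughly (the pre-existing
      part of the mask is from memory):

          #define MAS2_ATTRIB_MASK \
                  (MAS2_X0 | MAS2_X1 | MAS2_E | MAS2_G)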
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • powerpc: book3e: _PAGE_LENDIAN must be _PAGE_ENDIAN · c199efa2
      Bharat Bhushan authored
      For book3e, _PAGE_ENDIAN is not defined. In fact, what is defined is
      "_PAGE_LENDIAN", which is wrong; it should be _PAGE_ENDIAN.
      There are no compilation errors because
      arch/powerpc/include/asm/pte-common.h defines _PAGE_ENDIAN to 0
      when it is not defined anywhere else.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Better handling of exceptions that happen in real mode · 44a3add8
      Paul Mackerras authored
      When an interrupt or exception happens in the guest that comes to the
      host, the CPU goes to hypervisor real mode (MMU off) to handle the
      exception but doesn't change the MMU context.  After saving a few
      registers, we then clear the "in guest" flag.  If, for any reason,
      we get an exception in the real-mode code, that then gets handled
      by the normal kernel exception handlers, which turn the MMU on.  This
      is disastrous if the MMU is still set to the guest context, since we
      end up executing instructions from random places in the guest kernel
      with hypervisor privilege.
      
      In order to catch this situation, we define a new value for the "in guest"
      flag, KVM_GUEST_MODE_HOST_HV, to indicate that we are in hypervisor real
      mode with guest MMU context.  If the "in guest" flag is set to this value,
      we branch off to an emergency handler.  For the moment, this just does
      a branch to self to stop the CPU from doing anything further.
      
      While we're here, we define another new flag value to indicate that we
      are in a HV guest, as distinct from a PR guest.  This will be useful
      when we have a kernel that can support both PR and HV guests concurrently.
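
      In C terms the real-mode check amounts to something like this (the
      actual code is assembly in the HV interrupt entry path; this rendering
      is only illustrative):

          if (local_paca->kvm_hstate.in_guest == KVM_GUEST_MODE_HOST_HV)
                  kvmppc_bad_host_intr();  /* emergency handler: for now,
                                              a branch to self */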
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • kvm: powerpc: book3s hv: Fix vcore leak · f1378b1c
      Paul Mackerras authored
      Add kvmppc_free_vcores() to free the kvmppc_vcore structures that we
      allocate for a guest, which are currently being leaked.
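
      The fix, roughly (a sketch of the helper named above):

          static void kvmppc_free_vcores(struct kvm *kvm)
          {
                  long i;

                  for (i = 0; i < KVM_MAX_VCORES; ++i)
                          kfree(kvm->arch.vcores[i]);
                  kvm->arch.online_vcores = 0;
          }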
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Reduce number of shadow PTEs invalidated by MMU notifiers · 491d6ecc
      Paul Mackerras authored
      Currently, whenever any of the MMU notifier callbacks get called, we
      invalidate all the shadow PTEs.  This is inefficient because it means
      that we typically then get a lot of DSIs and ISIs in the guest to fault
      the shadow PTEs back in.  We do this even if the address range being
      notified doesn't correspond to guest memory.
      
      This commit adds code to scan the memslot array to find out what range(s)
      of guest physical addresses corresponds to the host virtual address range
      being affected.  For each such range we flush only the shadow PTEs
      for the range, on all cpus.
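
      The scan follows a standard pattern, roughly (a sketch, not the literal
      patch):

          slots = kvm_memslots(kvm);
          kvm_for_each_memslot(memslot, slots) {
                  unsigned long hva_start, hva_end;
                  gfn_t gfn, gfn_end;

                  hva_start = max(start, memslot->userspace_addr);
                  hva_end = min(end, memslot->userspace_addr +
                                     (memslot->npages << PAGE_SHIFT));
                  if (hva_start >= hva_end)
                          continue;
                  gfn = hva_to_gfn_memslot(hva_start, memslot);
                  gfn_end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1,
                                               memslot);
                  /* flush shadow PTEs for [gfn, gfn_end) on all vcpus */
          }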
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Mark pages accessed, and dirty if being written · adc0bafe
      Paul Mackerras authored
      The mark_page_dirty() function, despite what its name might suggest,
      doesn't actually mark the page as dirty as far as the MM subsystem is
      concerned.  It merely sets a bit in KVM's map of dirty pages, if
      userspace has requested dirty tracking for the relevant memslot.
      To tell the MM subsystem that the page is dirty, we have to call
      kvm_set_pfn_dirty() (or an equivalent such as SetPageDirty()).
      
      This adds a call to kvm_set_pfn_dirty(), and while we are here, also
      adds a call to kvm_set_pfn_accessed() to tell the MM subsystem that
      the page has been accessed.  Since we are now using the pfn in
      several places, this adds a 'pfn' variable to store it and changes
      the places that used hpaddr >> PAGE_SHIFT to use pfn instead, which
      is the same thing.
      
      This also changes a use of HPTE_R_PP to PP_RXRX.  Both are 3, but
      PP_RXRX is more informative as being the read-only page permission
      bit setting.
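
      The calls involved, side by side (sketch):

          if (writable)
                  kvm_set_pfn_dirty(pfn);   /* MM: page contents changed */
          kvm_set_pfn_accessed(pfn);        /* MM: page was referenced */
          mark_page_dirty(kvm, gfn);        /* KVM only: dirty-log bitmap */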
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Use mmu_notifier_retry() in kvmppc_mmu_map_page() · d78bca72
      Paul Mackerras authored
      When the MM code is invalidating a range of pages, it calls the KVM
      kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
      kvm_unmap_hva_range(), which arranges to flush all the existing host
      HPTEs for guest pages.  However, the Linux PTEs for the range being
      flushed are still valid at that point.  We are not supposed to establish
      any new references to pages in the range until the ...range_end()
      notifier gets called.  The PPC-specific KVM code doesn't get any
      explicit notification of that; instead, we are supposed to use
      mmu_notifier_retry() to test whether we are or have been inside a
      range flush notifier pair while we have been getting a page and
      instantiating a host HPTE for the page.
      
      This therefore adds a call to mmu_notifier_retry inside
      kvmppc_mmu_map_page().  This call is inside a region locked with
      kvm->mmu_lock, which is the same lock that is taken by the KVM
      MMU notifier functions, thus ensuring that no new notification can
      proceed while we are in the locked region.  Inside this region we
      also create the host HPTE and link the corresponding hpte_cache
      structure into the lists used to find it later.  We cannot allocate
      the hpte_cache structure inside this locked region because that can
      lead to deadlock, so we allocate it outside the region and free it
      if we end up not using it.
      
      This also moves the updates of vcpu3s->hpte_cache_count inside the
      regions locked with vcpu3s->mmu_lock, and does the increment in
      kvmppc_mmu_hpte_cache_map() when the pte is added to the cache
      rather than when it is allocated, in order that the hpte_cache_count
      is accurate.
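
      The retry pattern, sketched (following the description above, not the
      literal patch):

          mmu_seq = kvm->mmu_notifier_seq;
          smp_rmb();

          /* get the page and allocate the hpte_cache outside the lock */
          pfn = kvmppc_gfn_to_pfn(vcpu, gfn, iswrite, &writable);
          cpte = kvmppc_mmu_hpte_cache_next(vcpu);

          spin_lock(&kvm->mmu_lock);
          if (mmu_notifier_retry(kvm, mmu_seq)) {
                  r = -EAGAIN;  /* raced with a range flush; retry */
                  goto out_unlock;
          }
          /* create the host HPTE and link cpte into the lookup lists */
          out_unlock:
          spin_unlock(&kvm->mmu_lock);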
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Better handling of host-side read-only pages · 93b159b4
      Paul Mackerras authored
      Currently we request write access to all pages that get mapped into the
      guest, even if the guest is only loading from the page.  This reduces
      the effectiveness of KSM because it means that we unshare every page we
      access.  Also, we always set the changed (C) bit in the guest HPTE if
      it allows writing, even for a guest load.
      
      This fixes both these problems.  We pass an 'iswrite' flag to the
      mmu.xlate() functions and to kvmppc_mmu_map_page() to indicate whether
      the access is a load or a store.  The mmu.xlate() functions now only
      set C for stores.  kvmppc_gfn_to_pfn() now calls gfn_to_pfn_prot()
      instead of gfn_to_pfn() so that it can indicate whether we need write
      access to the page, and get back a 'writable' flag to indicate whether
      the page is writable or not.  If that 'writable' flag is clear, we then
      make the host HPTE read-only even if the guest HPTE allowed writing.
      
      This means that we can get a protection fault when the guest writes to a
      page that it has mapped read-write but which is read-only on the host
      side (perhaps due to KSM having merged the page).  Thus we now call
      kvmppc_handle_pagefault() for protection faults as well as HPTE not found
      faults.  In kvmppc_handle_pagefault(), if the access was allowed by the
      guest HPTE and we thus need to install a new host HPTE, we then need to
      remove the old host HPTE if there is one.  This is done with a new
      function, kvmppc_mmu_unmap_page(), which uses kvmppc_mmu_pte_vflush() to
      find and remove the old host HPTE.
      
      Since the memslot-related functions require the KVM SRCU read lock to
      be held, this adds srcu_read_lock/unlock pairs around the calls to
      kvmppc_handle_pagefault().
      
      Finally, this changes kvmppc_mmu_book3s_32_xlate_pte() to not ignore
      guest HPTEs that don't permit access, and to return -EPERM for accesses
      that are not permitted by the page protections.
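
      The core of the host-side read-only handling, sketched:

          pfn = gfn_to_pfn_prot(kvm, gfn, iswrite, &writable);
          if (!writable) {
                  /* host HPTE read-only even if the guest HPTE allows
                     writing (e.g. a KSM-merged page) */
                  hpte_r = (hpte_r & ~HPTE_R_PP) | PP_RXRX;
          }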
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: Move skip-interrupt handlers to common code · 4f6c11db
      Paul Mackerras authored
      Both PR and HV KVM have separate, identical copies of the
      kvmppc_skip_interrupt and kvmppc_skip_Hinterrupt handlers that are
      used for the situation where an interrupt happens when loading the
      instruction that caused an exit from the guest.  To eliminate this
      duplication and make it easier to compile in both PR and HV KVM,
      this moves this code to arch/powerpc/kernel/exceptions-64s.S along
      with other kernel interrupt handler code.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Allocate kvm_vcpu structs from kvm_vcpu_cache · 3ff95502
      Paul Mackerras authored
      This makes PR KVM allocate its kvm_vcpu structs from the kvm_vcpu_cache
      rather than having them embedded in the kvmppc_vcpu_book3s struct,
      which is allocated with vzalloc.  The reason is to reduce the
      differences between PR and HV KVM in order to make it easier to have
      them coexist in one kernel binary.
      
      With this, the kvm_vcpu struct has a pointer to the kvmppc_vcpu_book3s
      struct.  The pointer to the kvmppc_book3s_shadow_vcpu struct has moved
      from the kvmppc_vcpu_book3s struct to the kvm_vcpu struct, and is only
      present for 32-bit, since it is only used for 32-bit.
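
      Sketch of the allocation change (separate allocations instead of one
      struct embedded in the other):

          vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
          vcpu_book3s = vzalloc(sizeof(struct kvmppc_vcpu_book3s));
          vcpu->arch.book3s = vcpu_book3s;  /* kvm_vcpu points to book3s */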
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: squash in compile fix from Aneesh]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe · 9308ab8e
      Paul Mackerras authored
      This adds a per-VM mutex to provide mutual exclusion between vcpus
      for accesses to and updates of the guest hashed page table (HPT).
      This also makes the code use single-byte writes to the HPT entry
      when updating of the reference (R) and change (C) bits.  The reason
      for doing this, rather than writing back the whole HPTE, is that on
      non-PAPR virtual machines, the guest OS might be writing to the HPTE
      concurrently, and writing back the whole HPTE might conflict with
      that.  Also, real hardware does single-byte writes to update R and C.
      
      The new mutex is taken in kvmppc_mmu_book3s_64_xlate() when reading
      the HPT and updating R and/or C, and in the PAPR HPT update hcalls
      (H_ENTER, H_REMOVE, etc.).  Having the mutex means that we don't need
      to use a hypervisor lock bit in the HPT update hcalls, and we don't
      need to be careful about the order in which the bytes of the HPTE are
      updated by those hcalls.
      
      The other change here is to make emulated TLB invalidations (tlbie)
      effective across all vcpus.  To do this we call kvmppc_mmu_pte_vflush
      for all vcpus in kvmppc_mmu_book3s_64_tlbie().
      
      For 32-bit, this makes the setting of the accessed and dirty bits use
      single-byte writes, and makes tlbie invalidate shadow HPTEs for all
      vcpus.
      
      With this, PR KVM can successfully run SMP guests.
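
      Sketch of a single-byte update of the R bit under the new mutex (the
      byte index assumes the big-endian HPTE layout; illustrative only):

          mutex_lock(&vcpu->kvm->arch.hpt_mutex);
          if (!(hpte_r & HPTE_R_R)) {
                  /* one-byte store: don't clobber guest writes to the
                     other bytes of the HPTE */
                  ((u8 *)&hptep[1])[6] |= (HPTE_R_R >> 8);
          }
          mutex_unlock(&vcpu->kvm->arch.hpt_mutex);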
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Correct errors in H_ENTER implementation · 5cd92a95
      Paul Mackerras authored
      The implementation of H_ENTER in PR KVM has some errors:
      
      * With H_EXACT not set, if the HPTEG is full, we return H_PTEG_FULL
        as the return value of kvmppc_h_pr_enter, but the caller is expecting
        one of the EMULATE_* values.  The H_PTEG_FULL needs to go in the
        guest's R3 instead.
      
      * With H_EXACT set, if the selected HPTE is already valid, the H_ENTER
        call should return an H_PTEG_FULL error.
      
      This fixes these errors and also makes it write only the selected HPTE,
      not the whole group, since only the selected HPTE has been modified.
      This also micro-optimizes the calculations involving pte_index and i.
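
      The key contract, sketched: the hcall status goes to the guest, while
      the emulation status goes to the caller:

          kvmppc_set_gpr(vcpu, 3, ret); /* H_SUCCESS or H_PTEG_FULL, in r3 */
          return EMULATE_DONE;          /* what the caller expects */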
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Handle PP0 page-protection bit in guest HPTEs · 03a9c903
      Paul Mackerras authored
      64-bit POWER processors have a three-bit field for page protection in
      the hashed page table entry (HPTE).  Currently we only interpret the two
      bits that were present in older versions of the architecture.  The only
      defined combination that has the new bit set is 110, meaning read-only
      for supervisor and no access for user mode.
      
      This adds code to kvmppc_mmu_book3s_64_xlate() to interpret the extra
      bit appropriately.
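
      The extra bit folds into the protection value roughly like this (the
      shift is from memory; PP0 sits in the high bit of the HPTE's second
      doubleword):

          pp = (r & HPTE_R_PP) | ((r & HPTE_R_PP0) >> 61);
          /* pp == 6 (0b110): supervisor read-only, no user access */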
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Use 64k host pages where possible · c9029c34
      Paul Mackerras authored
      Currently, PR KVM uses 4k pages for the host-side mappings of guest
      memory, regardless of the host page size.  When the host page size is
      64kB, we might as well use 64k host page mappings for guest mappings
      of 64kB and larger pages and for guest real-mode mappings.  However,
      the magic page has to remain a 4k page.
      
      To implement this, we first add another flag bit to the guest VSID
      values we use, to indicate that this segment is one where host pages
      should be mapped using 64k pages.  For segments with this bit set
      we set the bits in the shadow SLB entry to indicate a 64k base page
      size.  When faulting in host HPTEs for this segment, we make them
      64k HPTEs instead of 4k.  We record the pagesize in struct hpte_cache
      for use when invalidating the HPTE.
      
      For now we restrict the segment containing the magic page (if any) to
      4k pages.  It should be possible to lift this restriction in future
      by ensuring that the magic 4k page is appropriately positioned within
      a host 64k page.
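
      Sketched at the shadow-SLB level (VSID_64K is the flag bit described
      above; the encoding shown is the usual 64k base-page-size one):

          if (vsid & VSID_64K)
                  slb_vsid |= SLB_VSID_L | SLB_VSID_LP_01; /* 64k base size */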
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Allow guest to use 64k pages · a4a0f252
      Paul Mackerras authored
      This adds the code to interpret 64k HPTEs in the guest hashed page
      table (HPT), 64k SLB entries, and to tell the guest about 64k pages
      in kvm_vm_ioctl_get_smmu_info().  Guest 64k pages are still shadowed
      by 4k pages.
      
      This also adds another hash table to the four we have already in
      book3s_mmu_hpte.c to allow us to find all the PTEs that we have
      instantiated that match a given 64k guest page.
      
      The tlbie instruction changed starting with POWER6 to use a bit in
      the RB operand to indicate large page invalidations, and to use other
      RB bits to indicate the base and actual page sizes and the segment
      size.  64k pages came in slightly earlier, with POWER5++.
      We use one bit in vcpu->arch.hflags to indicate that the emulated
      cpu supports 64k pages, and another to indicate that it has the new
      tlbie definition.
      
      The KVM_PPC_GET_SMMU_INFO ioctl presents a bit of a problem, because
      the MMU capabilities depend on which CPU model we're emulating, but it
      is a VM ioctl not a VCPU ioctl and therefore doesn't get passed a VCPU
      fd.  In addition, commonly-used userspace (QEMU) calls it before
      setting the PVR for any VCPU.  Therefore, as a best effort we look at
      the first vcpu in the VM and return 64k pages or not depending on its
      capabilities.  We also make the PVR default to the host PVR on recent
      CPUs that support 1TB segments (and therefore multiple page sizes as
      well) so that KVM_PPC_GET_SMMU_INFO will include 64k page and 1TB
      segment support on those CPUs.
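
      Sketch of the capability bits, set when emulating a CPU that has them
      (flag names as described above):

          vcpu->arch.hflags |= BOOK3S_HFLAG_MULTI_PGSIZE |
                               BOOK3S_HFLAG_NEW_TLBIE;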
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu · a2d56020
      Paul Mackerras authored
      Currently PR-style KVM keeps the volatile guest register values
      (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
      the main kvm_vcpu struct.  For 64-bit, the shadow_vcpu exists in two
      places, a kmalloc'd struct and in the PACA, and it gets copied back
      and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
      can't rely on being able to access the kmalloc'd struct.
      
      This changes the code to copy the volatile values into the shadow_vcpu
      as one of the last things done before entering the guest.  Similarly
      the values are copied back out of the shadow_vcpu to the kvm_vcpu
      immediately after exiting the guest.  We arrange for interrupts to be
      still disabled at this point so that we can't get preempted on 64-bit
      and end up copying values from the wrong PACA.
      
      This means that the accessor functions in kvm_book3s.h for these
      registers are greatly simplified, and are the same between PR and HV KVM.
      In places where accesses to shadow_vcpu fields are now replaced by
      accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
      Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
      
      With this, the time to read the PVR one million times in a loop went
      from 567.7ms to 575.5ms (averages of 6 values), an increase of about
      1.4% for this worst-case test for guest entries and exits.  The
      standard deviation of the measurements is about 11ms, so the
      difference is only marginally significant statistically.
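
      The entry/exit sequence, sketched (the helper names are an assumption
      from memory, not quoted from the patch):

          /* interrupts stay hard-disabled around both copies */
          kvmppc_copy_to_svcpu(svcpu, vcpu);   /* volatiles in, pre-entry */
          ret = __kvmppc_vcpu_run(kvm_run, vcpu);
          kvmppc_copy_from_svcpu(vcpu, svcpu); /* volatiles out, post-exit */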
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Fix compilation without CONFIG_ALTIVEC · f2481771
      Paul Mackerras authored
      Commit 9d1ffdd8 ("KVM: PPC: Book3S PR: Don't corrupt guest state
      when kernel uses VMX") added a call to kvmppc_load_up_altivec() that
      isn't guarded by CONFIG_ALTIVEC, causing a link failure when building
      a kernel without CONFIG_ALTIVEC set.  This adds an #ifdef to fix this.
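
      The fix amounts to the usual guard (sketch):

          #ifdef CONFIG_ALTIVEC
                  kvmppc_load_up_altivec();
          #endif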
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Don't crash host on unknown guest interrupt · f3271d4c
      Paul Mackerras authored
      If we come out of a guest with an interrupt that we don't know about,
      instead of crashing the host with a BUG(), we now return to userspace
      with the exit reason set to KVM_EXIT_UNKNOWN and the trap vector in
      the hw.hardware_exit_reason field of the kvm_run structure, as is done
      on x86.  Note that run->exit_reason is already set to KVM_EXIT_UNKNOWN
      at the beginning of kvmppc_handle_exit().
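
      Sketch of the default case after this change:

          default:
                  run->hw.hardware_exit_reason = vcpu->arch.trap;
                  r = RESUME_HOST;  /* back to userspace, not BUG() */
                  break;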
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 · 388cc6e1
      Paul Mackerras authored
      This enables us to use the Processor Compatibility Register (PCR) on
      POWER7 to put the processor into architecture 2.05 compatibility mode
      when running a guest.  In this mode the new instructions and registers
      that were introduced on POWER7 are disabled in user mode.  This
      includes all the VSX facilities plus several other instructions such
      as ldbrx, stdbrx, popcntw, popcntd, etc.
      
      To select this mode, we have a new register accessible through the
      set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT.  Setting
      this to zero gives the full set of capabilities of the processor.
      Setting it to one of the "logical" PVR values defined in PAPR puts
      the vcpu into the compatibility mode for the corresponding
      architecture level.  The supported values are:
      
      0x0f000002	Architecture 2.05 (POWER6)
      0x0f000003	Architecture 2.06 (POWER7)
      0x0f100003	Architecture 2.06+ (POWER7+)
      
      Since the PCR is per-core, the architecture compatibility level and
      the corresponding PCR value are stored in the struct kvmppc_vcore, and
      are therefore shared between all vcpus in a virtual core.
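
      Sketch of how a supported value takes effect (constants shown for the
      2.05 case; the other values map analogously per the list above):

          switch (arch_compat) {
          case PVR_ARCH_205:
                  pcr = PCR_ARCH_205;  /* mask post-2.05 facilities */
                  break;
          case 0:
                  pcr = 0;             /* full native capabilities */
                  break;
          }
          vcpu->arch.vcore->arch_compat = arch_compat;
          vcpu->arch.vcore->pcr = pcr;  /* per-core, shared by the vcore */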
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: squash in fix to add missing break statements and documentation]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Add support for guest Program Priority Register · 4b8473c9
      Paul Mackerras authored
      POWER7 and later IBM server processors have a register called the
      Program Priority Register (PPR), which controls the priority of
      each hardware CPU SMT thread, and affects how fast it runs compared
      to other SMT threads.  This priority can be controlled by writing to
      the PPR or by use of a set of instructions of the form or rN,rN,rN
      which are otherwise no-ops but have been defined to set the priority
      to particular levels.
      
      This adds code to context switch the PPR when entering and exiting
      guests and to make the PPR value accessible through the SET/GET_ONE_REG
      interface.  When entering the guest, we set the PPR as late as
      possible, because if we are setting a low thread priority it will
      make the code run slowly from that point on.  Similarly, the
      first-level interrupt handlers save the PPR value in the PACA very
      early on, and set the thread priority to the medium level, so that
      the interrupt handling code runs at a reasonable speed.
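
      Sketch of the GET side of the ONE_REG plumbing:

          case KVM_REG_PPC_PPR:
                  *val = get_reg_val(id, vcpu->arch.ppr);
                  break;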
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Store LPCR value for each virtual core · a0144e2a
      Paul Mackerras authored
      This adds the ability to have a separate LPCR (Logical Partitioning
      Control Register) value relating to a guest for each virtual core,
      rather than only having a single value for the whole VM.  This
      corresponds to what real POWER hardware does, where there is a LPCR
      per CPU thread but most of the fields are required to have the same
      value on all active threads in a core.
      
      The per-virtual-core LPCR can be read and written using the
      GET/SET_ONE_REG interface.  Userspace can only modify the
      following fields of the LPCR value:
      
      DPFD	Default prefetch depth
      ILE	Interrupt little-endian
      TC	Translation control (secondary HPT hash group search disable)
      
      We still maintain a per-VM default LPCR value in kvm->arch.lpcr, which
      contains bits relating to memory management, i.e. the Virtualized
      Partition Memory (VPM) bits and the bits relating to guest real mode.
      When this default value is updated, the update needs to be propagated
      to the per-vcore values, so we add a kvmppc_update_lpcr() helper to do
      that.
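
      The helper, roughly (a sketch close to the shape described above):

          void kvmppc_update_lpcr(struct kvm *kvm, unsigned long lpcr,
                                  unsigned long mask)
          {
                  long i;

                  kvm->arch.lpcr = (kvm->arch.lpcr & ~mask) | lpcr;
                  for (i = 0; i < KVM_MAX_VCORES; ++i) {
                          struct kvmppc_vcore *vc = kvm->arch.vcores[i];

                          if (!vc)
                                  continue;
                          spin_lock(&vc->lock);
                          vc->lpcr = (vc->lpcr & ~mask) | lpcr;
                          spin_unlock(&vc->lock);
                  }
          }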
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: fix whitespace]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: BookE: Add GET/SET_ONE_REG interface for VRSAVE · 8b75cbbe
      Paul Mackerras authored
      This makes the VRSAVE register value for a vcpu accessible through
      the GET/SET_ONE_REG interface on Book E systems (in addition to the
      existing GET/SET_SREGS interface), for consistency with Book 3S.
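
      Sketch of the new GET case:

          case KVM_REG_PPC_VRSAVE:
                  val = get_reg_val(reg->id, vcpu->arch.vrsave);
                  break;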
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>