1. 12 Sep, 2017 2 commits
    • KVM: PPC: Book3S HV: Hold kvm->lock around call to kvmppc_update_lpcr · cf5f6f31
      Paul Mackerras authored
      Commit 468808bd ("KVM: PPC: Book3S HV: Set process table for HPT
      guests on POWER9", 2017-01-30) added a call to kvmppc_update_lpcr()
      without holding the kvm->lock mutex around it, as required.
      This adds the lock/unlock pair and, for good measure, includes
      the kvmppc_setup_partition_table() call in the locked region,
      since it alters global state of the VM.
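
      The shape of the change is roughly as sketched below.  This is a
      hedged illustration of the locking described above, not the upstream
      diff: the surrounding error handling of kvmhv_configure_mmu() is
      elided, and cfg, lpcr and lpcr_mask stand in for whatever values the
      caller computes.

        /* Sketch only: serialize updates to VM-global MMU state. */
        mutex_lock(&kvm->lock);
        kvm->arch.process_table = cfg->process_table;
        kvmppc_setup_partition_table(kvm);
        /*
         * kvmppc_update_lpcr() rewrites every VCPU's LPCR, so it must
         * also run with kvm->lock held.
         */
        kvmppc_update_lpcr(kvm, lpcr, lpcr_mask);
        mutex_unlock(&kvm->lock);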
      
      This error appears not to have any fatal consequences for the host;
      the consequences would be that the VCPUs could end up running with
      different LPCR values, or that an update to the LPCR value made by
      userspace through the one_reg interface, or by kvmhv_configure_mmu(),
      could get overwritten.
      
      Cc: stable@vger.kernel.org # v4.10+
      Fixes: 468808bd ("KVM: PPC: Book3S HV: Set process table for HPT guests on POWER9")
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Don't access XIVE PIPR register using byte accesses · d222af07
      Benjamin Herrenschmidt authored
      The XIVE interrupt controller on POWER9 machines doesn't support byte
      accesses to any register in the thread management area other than the
      CPPR (current processor priority register).  In particular, when
      reading the PIPR (pending interrupt priority register), we need to
      do a 32-bit or 64-bit load.
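
      For illustration only, a C-level sketch of the idea; the actual fix
      may well be in real-mode assembly, and the offset and shift used
      below (tm_qw_offset, pipr_shift) are placeholders rather than the
      documented TIMA layout:

        /* Read the whole queue word with a 64-bit MMIO load and extract
         * the PIPR byte, instead of doing a byte load at the PIPR offset
         * (which the XIVE thread management area does not support). */
        u64 qw = __raw_readq(tima + tm_qw_offset);    /* offset is a placeholder */
        u8 pipr = (qw >> pipr_shift) & 0xff;          /* shift is a placeholder  */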
      
      Cc: stable@vger.kernel.org # v4.13
      Fixes: 2c4fb78f ("KVM: PPC: Book3S HV: Workaround POWER9 DD1.0 bug causing IPB bit loss")
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  2. 08 Sep, 2017 1 commit
  3. 07 Sep, 2017 3 commits
  4. 05 Sep, 2017 5 commits
  5. 01 Sep, 2017 1 commit
  6. 31 Aug, 2017 11 commits
  7. 30 Aug, 2017 1 commit
    • KVM: PPC: Book3S HV: Protect updates to spapr_tce_tables list · edd03602
      Paul Mackerras authored
      Al Viro pointed out that while one thread of a process is executing
      in kvm_vm_ioctl_create_spapr_tce(), another thread could guess the
      file descriptor returned by anon_inode_getfd() and close() it before
      the first thread has added it to the kvm->arch.spapr_tce_tables list.
      That highlights a more general problem: there is no mutual exclusion
      between writers to the spapr_tce_tables list, leading to the
      possibility of the list becoming corrupted, which could cause a
      host kernel crash.
      
      To fix the mutual exclusion problem, we add a mutex_lock/unlock
      pair around the list_del_rcu() in kvm_spapr_tce_release().  Also,
      this moves the call to anon_inode_getfd() inside the region
      protected by the kvm->lock mutex, after we have done the check for
      a duplicate LIOBN.  This means that if another thread does guess the
      file descriptor and closes it, its call to kvm_spapr_tce_release()
      will not do any harm because it will have to wait until the first
      thread has released kvm->lock.  With this, there are no failure
      points in kvm_vm_ioctl_create_spapr_tce() after the call to
      anon_inode_getfd().
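
      Schematically, the two pieces of the fix look like the sketch below
      (a hedged illustration of the ordering and locking, not the exact
      upstream diff; error unwinding and reference counting are elided):

        /* Release side (kvm_spapr_tce_release()): writers to the
         * spapr_tce_tables list are serialized by kvm->lock, so the
         * removal takes the mutex as well. */
        mutex_lock(&stt->kvm->lock);
        list_del_rcu(&stt->list);
        mutex_unlock(&stt->kvm->lock);

        /* Creation side (kvm_vm_ioctl_create_spapr_tce()): the fd is
         * created only after the duplicate-LIOBN check and still under
         * kvm->lock, so a close() on a guessed fd has to wait for the
         * lock before it can touch the list. */
        mutex_lock(&kvm->lock);
        ret = 0;
        list_for_each_entry(siter, &kvm->arch.spapr_tce_tables, list) {
                if (siter->liobn == args->liobn) {
                        ret = -EBUSY;
                        break;
                }
        }
        if (!ret)
                ret = anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops,
                                       stt, O_RDWR | O_CLOEXEC);
        if (ret >= 0)
                list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
        mutex_unlock(&kvm->lock);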
      
      The other things that the second thread could do with the guessed
      file descriptor are to mmap it or to pass it as a parameter to a
      KVM_DEV_VFIO_GROUP_SET_SPAPR_TCE ioctl on a KVM device fd.  An mmap
      call won't cause any harm because kvm_spapr_tce_mmap() and
      kvm_spapr_tce_fault() don't access the spapr_tce_tables list or
      the kvmppc_spapr_tce_table.list field, and the fields that they do use
      have been properly initialized by the time of the anon_inode_getfd()
      call.
      
      The KVM_DEV_VFIO_GROUP_SET_SPAPR_TCE ioctl calls
      kvm_spapr_tce_attach_iommu_group(), which scans the spapr_tce_tables
      list looking for the kvmppc_spapr_tce_table struct corresponding to
      the fd given as the parameter.  Either it will find the new entry
      or it won't; if it doesn't, it just returns an error, and if it
      does, it will function normally.  So, in each case there is no
      harmful effect.
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  8. 29 Aug, 2017 5 commits
  9. 28 Aug, 2017 2 commits
  10. 25 Aug, 2017 5 commits
  11. 24 Aug, 2017 4 commits
    • KVM: nVMX: Fix trying to cancel vmlaunch/vmresume · bfcf83b1
      Wanpeng Li authored
      ------------[ cut here ]------------
      WARNING: CPU: 7 PID: 3861 at /home/kernel/ssd/kvm/arch/x86/kvm//vmx.c:11299 nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
      CPU: 7 PID: 3861 Comm: qemu-system-x86 Tainted: G        W  OE   4.13.0-rc4+ #11
      RIP: 0010:nested_vmx_vmexit+0x176e/0x1980 [kvm_intel]
      Call Trace:
       ? kvm_multiple_exception+0x149/0x170 [kvm]
       ? handle_emulation_failure+0x79/0x230 [kvm]
       ? load_vmcs12_host_state+0xa80/0xa80 [kvm_intel]
       ? check_chain_key+0x137/0x1e0
       ? reexecute_instruction.part.168+0x130/0x130 [kvm]
       nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
       ? nested_vmx_inject_exception_vmexit+0xb7/0x100 [kvm_intel]
       vmx_queue_exception+0x197/0x300 [kvm_intel]
       kvm_arch_vcpu_ioctl_run+0x1b0c/0x2c90 [kvm]
       ? kvm_arch_vcpu_runnable+0x220/0x220 [kvm]
       ? preempt_count_sub+0x18/0xc0
       ? restart_apic_timer+0x17d/0x300 [kvm]
       ? kvm_lapic_restart_hv_timer+0x37/0x50 [kvm]
       ? kvm_arch_vcpu_load+0x1d8/0x350 [kvm]
       kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
       ? kvm_vcpu_ioctl+0x4e4/0x910 [kvm]
       ? kvm_dev_ioctl+0xbe0/0xbe0 [kvm]
      
      The flag "nested_run_pending", which can override the decision of which should run
      next, L1 or L2. nested_run_pending=1 means that we *must* run L2 next, not L1. This
      is necessary in particular when L1 did a VMLAUNCH of L2 and therefore expects L2 to
      be run (and perhaps be injected with an event it specified, etc.). Nested_run_pending
      is especially intended to avoid switching  to L1 in the injection decision-point.
      
      This can be handled just like the other cases in vmx_check_nested_events, instead of
      having a special case in vmx_queue_exception.
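
      As a hedged sketch of that decision point (nested_exit_on_exception()
      is a hypothetical helper standing for "this exception would cause a
      nested vmexit"; exception.pending and nested_run_pending are the real
      fields):

        /* In a vmx_check_nested_events()-style routine: while a
         * VMLAUNCH/VMRESUME of L2 is still pending, nothing may switch
         * back to L1, so ask the caller to retry after the vmentry. */
        if (vcpu->arch.exception.pending &&
            nested_exit_on_exception(vcpu)) {           /* hypothetical helper */
                if (vmx->nested.nested_run_pending)
                        return -EBUSY;
                /* otherwise reflect the exception to L1 as a nested vmexit */
        }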
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: X86: Fix loss of exception which has not yet been injected · 664f8e26
      Wanpeng Li authored
      vmx_complete_interrupts() assumes that the exception is always injected,
      so it can be dropped by kvm_clear_exception_queue().  However,
      an exception cannot be injected immediately if it is: 1) originally
      destined to a nested guest; 2) trapped to cause a vmexit; 3) happening
      right after VMLAUNCH/VMRESUME, i.e. when nested_run_pending is true.
      
      This patch applies to exceptions the same algorithm that is used for
      NMIs, replacing exception.reinject with "exception.injected" (equivalent
      to nmi_injected).
      
      exception.pending now represents an exception that is queued and whose
      side effects (e.g., update RFLAGS.RF or DR7) have not been applied yet.
      If exception.pending is true, the exception might instead result in a
      nested vmexit (in which case the side effects must not be applied).
      
      exception.injected instead represents an exception that is going to be
      injected into the guest at the next vmentry.
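
      A sketch of the resulting queued-exception state described above
      (field list abridged and illustrative, not the exact definition in
      kvm_host.h):

        struct kvm_queued_exception {
                bool pending;        /* queued; side effects (RFLAGS.RF, DR7)
                                        not yet applied; may still turn into
                                        a nested vmexit */
                bool injected;       /* will be injected at the next vmentry;
                                        the counterpart of nmi_injected */
                u8 nr;               /* exception vector */
                bool has_error_code;
                u32 error_code;
        };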
      Reported-by: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: use kvm_event_needs_reinjection · 274bba52
      Wanpeng Li authored
      Use the kvm_event_needs_reinjection() encapsulation rather than
      open-coding the same checks.
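
      Illustratively (a hedged sketch of what the encapsulation buys; the
      open-coded condition shown in the comment is the kind of test the
      helper is meant to replace):

        /*
         * Before: an open-coded check such as
         *     vcpu->arch.exception.pending ||
         *     vcpu->arch.interrupt.pending ||
         *     vcpu->arch.nmi_injected
         * After: the existing helper.
         */
        if (kvm_event_needs_reinjection(vcpu))
                return true;   /* an event is already queued for (re)injection */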
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: MMU: speedup update_permission_bitmask · 09f037aa
      Paolo Bonzini authored
      update_permission_bitmask currently does a 128-iteration loop to,
      essentially, compute a constant array.  Computing the 8 bits in parallel
      reduces it to 16 iterations, and is enough to speed it up substantially
      because many boolean operations in the inner loop become constants or
      simplify noticeably.
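
      A hedged sketch of the technique (simplified: CR0.WP, SMEP and SMAP
      handling are left out, and w, u, x stand for precomputed byte masks
      whose bit i is set iff pte-permission combination i grants write,
      user or exec access):

        /*
         * permissions[byte] is a u8 whose bit i says whether pte-access
         * combination i faults for the page-fault error code
         * pfec = byte << 1.  Building all 8 bits with byte-wide masks
         * removes the 8-iteration inner loop, leaving only the 16 outer
         * iterations.
         */
        for (byte = 0; byte < 16; ++byte) {
                unsigned pfec = byte << 1;
                u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0; /* fault where !W */
                u8 uf = (pfec & PFERR_USER_MASK)  ? (u8)~u : 0; /* fault where !U */
                u8 ff = (pfec & PFERR_FETCH_MASK) ? (u8)~x : 0; /* fault where !X */

                mmu->permissions[byte] = wf | uf | ff;
        }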
      
      Because update_permission_bitmask is actually the top item in the profile
      for nested vmexits, this speeds up an L2->L1 vmexit by about ten thousand
      clock cycles, or up to 30%:
      
                                               before     after
         cpuid                                 35173      25954
         vmcall                                35122      27079
         inl_from_pmtimer                      52635      42675
         inl_from_qemu                         53604      44599
         inl_from_kernel                       38498      30798
         outl_to_kernel                        34508      28816
         wr_tsc_adjust_msr                     34185      26818
         rd_tsc_adjust_msr                     37409      27049
         mmio-no-eventfd:pci-mem               50563      45276
         mmio-wildcard-eventfd:pci-mem         34495      30823
         mmio-datamatch-eventfd:pci-mem        35612      31071
         portio-no-eventfd:pci-io              44925      40661
         portio-wildcard-eventfd:pci-io        29708      27269
         portio-datamatch-eventfd:pci-io       31135      27164
      
      (I wrote a small C program to compare the tables for all values of CR0.WP,
      CR4.SMAP and CR4.SMEP, and they match).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>