1. 15 May, 2019 5 commits
    • Merge tag 'kvmarm-for-v5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD · dd53f610
      Paolo Bonzini authored
      
      KVM/arm updates for 5.2
      
      - guest SVE support
      - guest Pointer Authentication support
      - Better discrimination of perf counters between host and guests
      
      Conflicts:
      	include/uapi/linux/kvm.h
    • Merge tag 'kvm-ppc-next-5.2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD · 59c5c58c
      Paolo Bonzini authored
      
      PPC KVM update for 5.2
      
      * Support for guests to access the new POWER9 XIVE interrupt controller
        hardware directly, reducing interrupt latency and overhead for guests.
      
      * In-kernel implementation of the H_PAGE_INIT hypercall.
      
      * Reduce memory usage of sparsely-populated IOMMU tables.
      
      * Several bug fixes.
      
      Second PPC KVM update for 5.2
      
      * Fix a bug, fix a spelling mistake, remove some useless code.
    • Revert "KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU" · f93f7ede
      Sean Christopherson authored
      The RDPMC-exiting control is dependent on the existence of the RDPMC
      instruction itself, i.e. is not tied to the "Architectural Performance
      Monitoring" feature.  For all intents and purposes, the control exists
      on all CPUs with VMX support, since RDPMC also exists on all CPUs with
      VMX support.  Per Intel's SDM:
      
        The RDPMC instruction was introduced into the IA-32 Architecture in
        the Pentium Pro processor and the Pentium processor with MMX technology.
        The earlier Pentium processors have performance-monitoring counters, but
        they must be read with the RDMSR instruction.
      
      Because RDPMC-exiting always exists, KVM requires the control and refuses
      to load if it's not available.  As a result, hiding the PMU from a guest
      breaks nested virtualization if the guest attempts to use KVM.
      
      While it's not explicitly stated in the RDPMC pseudocode, the VM-Exit
      check for RDPMC-exiting follows standard fault vs. VM-Exit prioritization
      for privileged instructions, e.g. occurs after the CPL/CR0.PE/CR4.PCE
      checks, but before the counter referenced in ECX is checked for validity.
      
      In other words, the original KVM behavior of injecting a #GP was correct,
      and the KVM unit test needs to be adjusted accordingly, e.g. eat the #GP
      when the unit test guest (L3 in this case) executes RDPMC without
      RDPMC-exiting set in the unit test host (L2).
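
      A minimal sketch of the adjusted expectation, assuming kvm-unit-tests'
      ASM_TRY()/exception_vector() helpers; the test name, counter index and
      report string are illustrative, not the actual unit test code:

        static void test_rdpmc_no_exiting(void)
        {
                bool gp;

                /*
                 * Execute RDPMC with an out-of-range index in ECX while
                 * RDPMC-exiting is clear in the L2 host's VMCS.  The ECX
                 * validity check comes after the (absent) VM-Exit check,
                 * so the guest must observe a #GP, not a VM-Exit.
                 */
                asm volatile(ASM_TRY("1f")
                             "rdpmc\n\t"
                             "1:"
                             : : "c"(-1u) : "eax", "edx");
                gp = exception_vector() == GP_VECTOR;
                report(gp, "RDPMC #GPs when RDPMC-exiting is not set");
        }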
      
      This reverts commit e51bfdb6.
      
      Fixes: e51bfdb6 ("KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU")
      Reported-by: David Hill <hilld@binarystorm.net>
      Cc: Saar Amar <saaramar@microsoft.com>
      Cc: Mihai Carabas <mihai.carabas@oracle.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Liran Alon <liran.alon@oracle.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: x86: Fix L1TF mitigation for shadow MMU · 61455bf2
      Kai Huang authored
      Currently KVM sets the 5 most significant bits of the physical address
      width reported by CPUID (boot_cpu_data.x86_phys_bits) in nonpresent or
      reserved SPTEs to mitigate L1TF attacks from the guest when using the
      shadow MMU.  However, for some Intel CPUs the physical address width of
      the internal cache is greater than the physical address width reported
      by CPUID.
      
      Use the kernel's existing boot_cpu_data.x86_cache_bits to determine the
      five most significant bits. Doing so improves KVM's L1TF mitigation in
      the unlikely scenario that system RAM overlaps the high order bits of
      the "real" physical address space as reported by CPUID. This aligns with
      the kernel's warnings regarding L1TF mitigation, e.g. in the above
      scenario the kernel won't warn the user about lack of L1TF mitigation
      if x86_cache_bits is greater than x86_phys_bits.
      
      Also initialize shadow_nonpresent_or_rsvd_mask explicitly to make it
      consistent with other 'shadow_{xxx}_mask', and opportunistically add a
      WARN once if KVM's L1TF mitigation cannot be applied on a system that
      is marked as being susceptible to L1TF.
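
      A condensed sketch of the resulting mask setup, paraphrased from the
      patch rather than quoted verbatim (rsvd_bits(s, e) builds a mask of
      bits s..e; shadow_nonpresent_or_rsvd_mask_len is 5):

        shadow_nonpresent_or_rsvd_mask = 0;
        low_phys_bits = boot_cpu_data.x86_cache_bits;
        if (boot_cpu_data.x86_cache_bits <
            52 - shadow_nonpresent_or_rsvd_mask_len) {
                /* Set the 5 MSBs of the cache-visible address width. */
                low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
                shadow_nonpresent_or_rsvd_mask =
                        rsvd_bits(low_phys_bits,
                                  boot_cpu_data.x86_cache_bits - 1);
        } else {
                /* No room for the mitigation; warn if the CPU is affected. */
                WARN_ON_ONCE(boot_cpu_has_bug(X86_BUG_L1TF));
        }
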
      Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: Disable intercept for FS/GS base MSRs in vmcs02 when possible · d69129b4
      Sean Christopherson authored
      If L1 is using an MSR bitmap, unconditionally merge the MSR bitmaps from
      L0 and L1 for MSR_{KERNEL,}_{FS,GS}_BASE.  KVM unconditionally exposes
      the MSRs to L1.  If KVM is also running in L1 then it's highly likely L1
      is also exposing the MSRs to L2, i.e. KVM doesn't need to intercept L2
      accesses.
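
      A sketch of the merge step, assuming a nested_vmx_disable_intercept_for_msr()
      style helper that clears the intercept in the merged vmcs02 bitmap only
      when L1's bitmap also passes the MSR through (64-bit only, since these
      MSRs don't exist on 32-bit):

        #ifdef CONFIG_X86_64
                nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
                                                     MSR_FS_BASE, MSR_TYPE_RW);
                nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
                                                     MSR_GS_BASE, MSR_TYPE_RW);
                nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
                                                     MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
        #endif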
      
      Based on code from Jintack Lim.
      
      Cc: Jintack Lim <jintack@xxxxxxxxxxxxxxx>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 14 May, 2019 3 commits
  3. 08 May, 2019 8 commits
  4. 01 May, 2019 1 commit
  5. 30 Apr, 2019 23 commits