1. 18 Oct, 2021 11 commits
  2. 15 Oct, 2021 2 commits
  3. 05 Oct, 2021 4 commits
  4. 04 Oct, 2021 18 commits
  5. 01 Oct, 2021 5 commits
    • KVM: x86: only allocate gfn_track when necessary · deae4a10
      David Stevens authored
      Avoid allocating the gfn_track arrays if nothing needs them. If there
      are no users of the API external to KVM (i.e. no GVT-g), then page
      tracking is only needed for shadow page tables. This means that when
      TDP is enabled and there are no external users, the gfn_track arrays
      can be lazily allocated when the shadow MMU is actually used. This
      avoids allocations equal to 0.05% of guest memory when nested
      virtualization is not used, if the kernel is compiled without GVT-g.
      Signed-off-by: David Stevens <stevensd@chromium.org>
      Message-Id: <20210922045859.2011227-3-stevensd@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
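      A minimal userspace sketch of the lazy-allocation idea described in this
      commit, assuming illustrative names ("slot", "write_track",
      "slot_enable_tracking") that are not the kernel's own:

      #include <stdio.h>
      #include <stdlib.h>

      struct slot {
          unsigned long npages;
          unsigned short *write_track;   /* NULL until tracking is first needed */
      };

      /* Allocate the tracking array on demand; a no-op if it already exists. */
      static int slot_enable_tracking(struct slot *s)
      {
          if (s->write_track)
              return 0;
          s->write_track = calloc(s->npages, sizeof(*s->write_track));
          return s->write_track ? 0 : -1;
      }

      static void track_gfn(struct slot *s, unsigned long gfn)
      {
          if (slot_enable_tracking(s))
              return;
          s->write_track[gfn]++;         /* count write-tracking users of the page */
      }

      int main(void)
      {
          struct slot s = { .npages = 512, .write_track = NULL };

          /* Nothing is allocated until the shadow-MMU path asks for tracking. */
          track_gfn(&s, 42);
          printf("gfn 42 track count: %u\n", s.write_track[42]);
          free(s.write_track);
          return 0;
      }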
    • KVM: x86: add config for non-kvm users of page tracking · e9d0c0c4
      David Stevens authored
      Add a config option that allows KVM to determine whether or not there
      are any external users of page tracking.
      Signed-off-by: David Stevens <stevensd@chromium.org>
      Message-Id: <20210922045859.2011227-2-stevensd@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
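      A hedged sketch of the compile-time gate this commit describes, using a
      plain preprocessor symbol (CONFIG_EXTERNAL_TRACKING, an assumed name) in
      place of the real Kconfig option that external users such as GVT-g would
      select:

      #include <stdbool.h>
      #include <stdio.h>

      #ifdef CONFIG_EXTERNAL_TRACKING
      #define EXTERNAL_TRACKING_ENABLED true
      #else
      #define EXTERNAL_TRACKING_ENABLED false
      #endif

      /* Page tracking must be set up eagerly only if external users may exist. */
      static bool need_eager_page_tracking(bool tdp_enabled)
      {
          return EXTERNAL_TRACKING_ENABLED || !tdp_enabled;
      }

      int main(void)
      {
          printf("eager tracking with TDP: %d\n", need_eager_page_tracking(true));
          printf("eager tracking without TDP: %d\n", need_eager_page_tracking(false));
          return 0;
      }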
    • nSVM: Check for reserved encodings of TLB_CONTROL in nested VMCB · 174a921b
      Krish Sadhukhan authored
      According to section "TLB Flush" in APM vol 2,
      
          "Support for TLB_CONTROL commands other than the first two, is
           optional and is indicated by CPUID Fn8000_000A_EDX[FlushByAsid].
      
           All encodings of TLB_CONTROL not defined in the APM are reserved."
      Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
      Message-Id: <20210920235134.101970-3-krish.sadhukhan@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
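      A hedged sketch of the consistency check this commit describes: reject
      reserved TLB_CONTROL encodings in a nested VMCB and accept the ASID-flush
      encodings only when FlushByAsid is supported. The encoding values follow
      the APM; the function name and the has_flush_by_asid parameter are
      illustrative:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define TLB_CONTROL_DO_NOTHING        0
      #define TLB_CONTROL_FLUSH_ALL_ASID    1
      #define TLB_CONTROL_FLUSH_ASID        3
      #define TLB_CONTROL_FLUSH_ASID_LOCAL  7

      static bool nested_tlb_ctl_valid(uint8_t tlb_ctl, bool has_flush_by_asid)
      {
          switch (tlb_ctl) {
          case TLB_CONTROL_DO_NOTHING:
          case TLB_CONTROL_FLUSH_ALL_ASID:
              return true;               /* the first two encodings are always valid */
          case TLB_CONTROL_FLUSH_ASID:
          case TLB_CONTROL_FLUSH_ASID_LOCAL:
              return has_flush_by_asid;  /* optional, per CPUID Fn8000_000A_EDX[FlushByAsid] */
          default:
              return false;              /* reserved encoding */
          }
      }

      int main(void)
      {
          printf("tlb_ctl=3 without FlushByAsid: %d\n", nested_tlb_ctl_valid(3, false));
          printf("tlb_ctl=5 (reserved):          %d\n", nested_tlb_ctl_valid(5, true));
          return 0;
      }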
    • kvm: use kvfree() in kvm_arch_free_vm() · 78b497f2
      Juergen Gross authored
      By switching from kfree() to kvfree() in kvm_arch_free_vm(), Arm64 can
      use the common variant. This can be accomplished by adding another
      macro, __KVM_HAVE_ARCH_VM_FREE, which will be used only by x86 for now.
      
      Further simplification can be achieved by adding __kvm_arch_free_vm(),
      which does the common part.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Message-Id: <20210903130808.30142-5-jgross@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
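      A userspace sketch of the pattern this commit describes: a shared helper
      for the common part plus an opt-in architecture override guarded by a
      "have arch hook" macro. The names mirror the commit message, but this is
      not the kernel code itself, and plain free() stands in for kvfree():

      #include <stdio.h>
      #include <stdlib.h>

      struct vm { int dummy; };

      /* Common part, usable by every architecture. */
      static inline void __arch_free_vm(struct vm *vm)
      {
          free(vm);     /* the kernel uses kvfree(), which handles both
                           kmalloc'd and vmalloc'd allocations */
      }

      #ifdef ARCH_HAVE_VM_FREE
      /* An architecture such as x86 defines the macro and supplies its own hook. */
      void arch_free_vm(struct vm *vm);
      #else
      /* Every other architecture falls back to the common variant. */
      static inline void arch_free_vm(struct vm *vm)
      {
          __arch_free_vm(vm);
      }
      #endif

      int main(void)
      {
          struct vm *vm = calloc(1, sizeof(*vm));
          arch_free_vm(vm);
          puts("vm freed via the common helper");
          return 0;
      }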
    • KVM: x86: Expose Predictive Store Forwarding Disable · b73a5432
      Babu Moger authored
      Predictive Store Forwarding: AMD Zen3 processors feature a new
      technology called Predictive Store Forwarding (PSF).
      
      PSF is a hardware-based micro-architectural optimization designed
      to improve the performance of code execution by predicting address
      dependencies between loads and stores.
      
      How PSF works:
      
      It is very common for a CPU to execute a load instruction to an address
      that was recently written by a store. Modern CPUs implement a technique
      known as Store-To-Load-Forwarding (STLF) to improve performance in such
      cases. With STLF, data from the store is forwarded directly to the load
      without having to wait for it to be written to memory. In a typical CPU,
      STLF occurs after the address of both the load and store are calculated
      and determined to match.
      
      PSF expands on this by speculating on the relationship between loads and
      stores without waiting for the address calculation to complete. With PSF,
      the CPU learns over time the relationship between loads and stores. If
      STLF typically occurs between a particular store and load, the CPU will
      remember this.
      
      In typical code, PSF provides a performance benefit by speculating on
      the load result and allowing later instructions to begin execution
      sooner than they otherwise would be able to.
      
      The details of the security analysis of AMD Predictive Store Forwarding
      are documented here:
      https://www.amd.com/system/files/documents/security-analysis-predictive-store-forwarding.pdf
      
      Predictive Store Forwarding controls:
      There are two hardware control bits which influence the PSF feature:
      - MSR 48h bit 2 – Speculative Store Bypass Disable (SSBD)
      - MSR 48h bit 7 – Predictive Store Forwarding Disable (PSFD)
      
      The PSF feature is disabled if either of these bits is set.  These bits
      are controllable on a per-thread basis in an SMT system. By default, both
      SSBD and PSFD are 0, meaning that the speculation features are enabled.
      
      While the SSBD bit disables PSF and speculative store bypass, PSFD only
      disables PSF.
      
      PSFD may be desirable for software which is concerned with the
      speculative behavior of PSF but desires a smaller performance impact than
      setting SSBD.
      
      Support for PSFD is indicated in CPUID Fn8000_0008 EBX[28].
      All processors that support PSF will also support PSFD.
      
      The Linux kernel does not have an interface to enable/disable PSFD yet.
      The plan here is to expose PSFD to KVM so that guest kernels can make
      use of it if they wish to.
      Signed-off-by: Babu Moger <Babu.Moger@amd.com>
      Message-Id: <163244601049.30292.5855870305350227855.stgit@bmoger-ubuntu>
      [Keep feature private to KVM, as requested by Borislav Petkov. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
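      A small runnable sketch tying together the architectural facts quoted in
      this commit: PSFD is enumerated in CPUID Fn8000_0008 EBX[28] and is
      controlled by bit 7 of MSR 48h (with SSBD in bit 2). The CPUID probe below
      works from userspace; the MSR bits are only shown as constants, since MSRs
      cannot be written by unprivileged code, and the macro names are mine:

      #include <stdio.h>
      #include <stdint.h>
      #include <cpuid.h>

      #define MSR_SPEC_CTRL             0x48
      #define SPEC_CTRL_SSBD            (1ULL << 2)   /* disables SSB and PSF */
      #define SPEC_CTRL_PSFD            (1ULL << 7)   /* disables only PSF */
      #define CPUID_8000_0008_EBX_PSFD  (1U << 28)

      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
              puts("CPUID leaf 0x80000008 not available");
              return 1;
          }

          if (ebx & CPUID_8000_0008_EBX_PSFD)
              printf("PSFD enumerated; a guest could set bit 7 (0x%llx) in MSR 0x%x\n",
                     (unsigned long long)SPEC_CTRL_PSFD, MSR_SPEC_CTRL);
          else
              puts("PSFD not enumerated on this CPU");

          return 0;
      }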