1. 28 Jul, 2022 23 commits
    • Revert "KVM: nVMX: Do not expose MPX VMX controls when guest MPX disabled" · 8805875a
      Paolo Bonzini authored
      Since commit 5f76f6f5 ("KVM: nVMX: Do not expose MPX VMX controls
      when guest MPX disabled"), KVM has taken ownership of the "load
      IA32_BNDCFGS" and "clear IA32_BNDCFGS" VMX entry/exit controls,
      trying to set these bits in the IA32_VMX_TRUE_{ENTRY,EXIT}_CTLS
      MSRs if the guest's CPUID supports MPX, and to clear them otherwise.
      
      The intent of the patch was to apply it to L0 in order to work around
      L1 kernels that lack the fix in commit 691bd434 ("kvm: vmx: allow
      host to access guest MSR_IA32_BNDCFGS", 2017-07-04): with L0 hiding the
      control bits from L1, L1 hides BNDCFGS from KVM_GET_MSR_INDEX_LIST,
      and the L1 bug is neutralized even in the absence of commit 691bd434.
      
      This was perhaps a sensible kludge at the time, but it is a horrible
      idea in the long term, and in fact it has not been extended to
      other CPUID bits like these:
      
        X86_FEATURE_LM => VM_EXIT_HOST_ADDR_SPACE_SIZE, VM_ENTRY_IA32E_MODE,
                          VMX_MISC_SAVE_EFER_LMA
      
        X86_FEATURE_TSC => CPU_BASED_RDTSC_EXITING, CPU_BASED_USE_TSC_OFFSETTING,
                           SECONDARY_EXEC_TSC_SCALING
      
        X86_FEATURE_INVPCID_SINGLE => SECONDARY_EXEC_ENABLE_INVPCID
      
        X86_FEATURE_MWAIT => CPU_BASED_MONITOR_EXITING, CPU_BASED_MWAIT_EXITING
      
        X86_FEATURE_INTEL_PT => SECONDARY_EXEC_PT_CONCEAL_VMX, SECONDARY_EXEC_PT_USE_GPA,
                                VM_EXIT_CLEAR_IA32_RTIT_CTL, VM_ENTRY_LOAD_IA32_RTIT_CTL
      
        X86_FEATURE_XSAVES => SECONDARY_EXEC_XSAVES
      
      These days it's sort of common knowledge that any MSR in
      KVM_GET_MSR_INDEX_LIST must allow *at least* setting it with KVM_SET_MSR
      to a default value, so it is unlikely that something like commit
      5f76f6f5 will be needed again.  So revert it, at the potential cost
      of breaking L1s with a 6-year-old kernel.  While in principle the L0 owner
      doesn't control what runs on L1, such an old hypervisor would probably
      have many other bugs.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: Let userspace set nVMX MSR to any _host_ supported value · f8ae08f9
      Sean Christopherson authored
      Restrict the nVMX MSRs based on KVM's config, not based on the guest's
      current config.  Using the guest's config to audit the new config
      prevents userspace from restoring the original config (KVM's config) if
      at any point in the past the guest's config was restricted in any way.
      
      Fixes: 62cc6b9d ("KVM: nVMX: support restore of VMX capability MSRs")
      Cc: stable@vger.kernel.org
      Cc: David Matlack <dmatlack@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220607213604.3346000-6-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: Rename handle_vm{on,off}() to handle_vmx{on,off}() · a645c2b5
      Sean Christopherson authored
      Rename the exit handlers for VMXON and VMXOFF to match the instruction
      names; the terms "vmon" and "vmoff" are not used anywhere in Intel's
      documentation, nor are they used elsewhere in KVM.
      
      Sadly, the exit reasons are exposed to userspace and so cannot be renamed
      without breaking userspace. :-(
      
      Fixes: ec378aee ("KVM: nVMX: Implement VMXON and VMXOFF")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220607213604.3346000-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: Inject #UD if VMXON is attempted with incompatible CR0/CR4 · c7d855c2
      Sean Christopherson authored
      Inject a #UD if L1 attempts VMXON with a CR0 or CR4 that is disallowed
      per the associated nested VMX MSRs' fixed0/1 settings.  KVM cannot rely
      on hardware to perform the checks, even for the few checks that have
      higher priority than VM-Exit, as (a) KVM may have forced CR0/CR4 bits in
      hardware while running the guest, (b) there may be incompatible CR0/CR4 bits
      that have lower priority than VM-Exit, e.g. CR0.NE, and (c) userspace may
      have further restricted the allowed CR0/CR4 values by manipulating the
      guest's nested VMX MSRs.
      
      Note, despite a very strong desire to throw shade at Jim, commit
      70f3aac9 ("kvm: nVMX: Remove superfluous VMX instruction fault checks")
      is not to blame for the buggy behavior (though the comment...).  That
      commit only removed the CR0.PE, EFLAGS.VM, and COMPATIBILITY mode checks
      (though it did erroneously drop the CPL check, but that has already been
      remedied).  KVM may force CR0.PE=1, but will do so only when also
      forcing EFLAGS.VM=1 to emulate Real Mode, i.e. hardware will still #UD.
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=216033
      Fixes: ec378aee ("KVM: nVMX: Implement VMXON and VMXOFF")
      Reported-by: Eric Li <ercli@ucdavis.edu>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220607213604.3346000-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
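      A minimal sketch of the fixed0/fixed1 rule the checks are based on
      (illustrative helper, not the actual patch): a CR0/CR4 value is legal
      for VMX operation only if every bit set in the FIXED0 MSR is also set
      in the value and no bit clear in the FIXED1 MSR is set in the value.

        /* Illustrative only: validate a CR value against the
         * IA32_VMX_CRx_FIXED0/FIXED1 MSR pair. */
        static bool crx_fixed_bits_valid(u64 val, u64 fixed0, u64 fixed1)
        {
                return ((val & fixed1) | fixed0) == val;
        }

      CR0.NE, mentioned above, is one such FIXED0 bit: hardware checks it
      with lower priority than the VM-Exit, so the emulator has to reject
      VMXON with CR0.NE=0 itself.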
    • KVM: nVMX: Account for KVM reserved CR4 bits in consistency checks · ca58f3aa
      Sean Christopherson authored
      Check that the guest (L2) and host (L1) CR4 values that would be loaded
      by nested VM-Enter and VM-Exit respectively are valid with respect to
      KVM's (L0 host) allowed CR4 bits.  Failure to check KVM reserved bits
      would allow L1 to load an illegal CR4 (or trigger hardware VM-Fail or
      failed VM-Entry) by massaging guest CPUID to allow features that are not
      supported by KVM.  Amusingly, KVM itself is an accomplice in its doom, as
      KVM adjusts L1's MSR_IA32_VMX_CR4_FIXED1 to allow L1 to enable bits for
      L2 based on L1's CPUID model.
      
      Note, although nested_{guest,host}_cr4_valid() are _currently_ used if
      and only if the vCPU is post-VMXON (nested.vmxon == true), that may not
      be true in the future, e.g. emulating VMXON has a bug where it doesn't
      check the allowed/required CR0/CR4 bits.
      
      Cc: stable@vger.kernel.org
      Fixes: 3899152c ("KVM: nVMX: fix checks on CR{0,4} during virtual VMX operation")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220607213604.3346000-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Split kvm_is_valid_cr4() and export only the non-vendor bits · c33f6f22
      Sean Christopherson authored
      Split the common x86 parts of kvm_is_valid_cr4(), i.e. the reserved bits
      checks, into a separate helper, __kvm_is_valid_cr4(), and export only the
      inner helper to vendor code in order to prevent nested VMX from calling
      back into vmx_is_valid_cr4() via kvm_is_valid_cr4().
      
      On SVM, this is a nop as SVM doesn't place any additional restrictions on
      CR4.
      
      On VMX, this is also currently a nop, but only because nested VMX is
      missing checks on reserved CR4 bits for nested VM-Enter.  That bug will
      be fixed in a future patch, and could simply use kvm_is_valid_cr4() as-is,
      but nVMX has _another_ bug where VMXON emulation doesn't enforce VMX's
      restrictions on CR0/CR4.  The cleanest and most intuitive way to fix the
      VMXON bug is to use nested_host_cr{0,4}_valid().  If the CR4 variant
      routes through kvm_is_valid_cr4(), using nested_host_cr4_valid() won't do
      the right thing for the VMXON case as vmx_is_valid_cr4() enforces VMX's
      restrictions if and only if the vCPU is post-VMXON.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220607213604.3346000-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
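      A rough sketch of the resulting split, with approximate names and
      fields (the vendor hook invocation is schematic):

        /* Common, vendor-agnostic reserved-bit checks; this is the part
         * exported to vendor code so nested VMX can audit a CR4 value
         * without recursing into vmx_is_valid_cr4(). */
        bool __kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
        {
                if (cr4 & cr4_reserved_bits)
                        return false;
                if (cr4 & vcpu->arch.cr4_guest_rsvd_bits)
                        return false;
                return true;
        }

        /* The wrapper keeps the vendor-specific check and stays local. */
        static bool kvm_is_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
        {
                return __kvm_is_valid_cr4(vcpu, cr4) &&
                       static_call(kvm_x86_is_valid_cr4)(vcpu, cr4);
        }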
    • KVM: selftests: Add an option to run vCPUs while disabling dirty logging · cfe12e64
      Sean Christopherson authored
      Add a command line option to dirty_log_perf_test to run vCPUs for the
      entire duration of disabling dirty logging.  By default, the test stops
      running vCPUs before disabling dirty logging, which is faster but
      less interesting as it doesn't stress KVM's handling of contention
      between page faults and the zapping of collapsible SPTEs.  Enabling the
      flag also lets the user verify that KVM is indeed rebuilding zapped SPTEs
      as huge pages by checking KVM's pages_{1g,2m,4k} stats.  Without vCPUs to
      fault in the zapped SPTEs, the stats will show that KVM is zapping pages,
      but they never show whether or not KVM actually allows huge pages to be
      recreated.
      
      Note!  Enabling the flag can _significantly_ increase runtime, especially
      if the thread that's disabling dirty logging doesn't have a dedicated
      pCPU, e.g. if all pCPUs are used to run vCPUs.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715232107.3775620-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Don't bottom out on leafs when zapping collapsible SPTEs · 85f44f8c
      Sean Christopherson authored
      When zapping collapsible SPTEs in the TDP MMU, don't bottom out on a leaf
      SPTE now that KVM doesn't require a PFN to compute the host mapping level,
      i.e. now that there's no need to first find a leaf SPTE and then step
      back up.
      
      Drop the now unused tdp_iter_step_up(), as it is not the safest of
      helpers (using any of the low level iterators requires some understanding
      of the various side effects).
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715232107.3775620-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Document the "rules" for using host_pfn_mapping_level() · 65e3b446
      Sean Christopherson authored
      Add a comment to document how host_pfn_mapping_level() can be used safely,
      as the line between safe and dangerous is quite thin.  E.g. if KVM were
      to ever support in-place promotion to create huge pages, consuming the
      level is safe if the caller holds mmu_lock and checks that there's an
      existing _leaf_ SPTE, but unsafe if the caller only checks that there's a
      non-leaf SPTE.
      
      Opportunistically tweak the existing comments to explicitly document why
      KVM needs to use READ_ONCE().
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715232107.3775620-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
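      A sketch of the READ_ONCE() point (hypothetical walk fragment, not the
      real helper): each host page table entry is snapshotted exactly once so
      that every subsequent check sees the same value, even if the primary MMU
      modifies the live entry concurrently.

        static int example_pud_level(pud_t *pudp)
        {
                pud_t pud = READ_ONCE(*pudp);   /* single snapshot */

                if (!pud_present(pud))
                        return PG_LEVEL_4K;     /* nothing mapped, stay small */
                if (pud_large(pud))
                        return PG_LEVEL_1G;     /* leaf at the PUD => 1GiB page */

                /* Not a leaf: a real walk would continue at the PMD level,
                 * again snapshotting the entry with READ_ONCE(). */
                return PG_LEVEL_4K;
        }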
    • KVM: x86/mmu: Don't require refcounted "struct page" to create huge SPTEs · a8ac499b
      Sean Christopherson authored
      Drop the requirement that a pfn be backed by a refcounted, compound,
      or ZONE_DEVICE struct page, and instead rely solely on the host page
      tables to identify huge pages.  The PageCompound() check is a remnant of
      an old implementation that identified (well, attempted to identify) huge
      pages without walking the host page tables.  The ZONE_DEVICE check was
      added as an exception to the PageCompound() requirement.  In other words,
      neither check is actually a hard requirement: if the primary MMU has the
      pfn backed with a huge page, then KVM can back the pfn with a huge page
      regardless of the backing store.
      
      Dropping the @pfn parameter will also allow KVM to query the max host
      mapping level without having to first get the pfn, which is advantageous
      for use outside of the page fault path where KVM wants to take action if
      and only if a page can be mapped huge, i.e. avoids the pfn lookup for
      gfns that can't be backed with a huge page.
      
      Cc: Mingwei Zhang <mizhang@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Mingwei Zhang <mizhang@google.com>
      Message-Id: <20220715232107.3775620-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Restrict mapping level based on guest MTRR iff they're used · d5e90a69
      Sean Christopherson authored
      Restrict the mapping level for SPTEs based on the guest MTRRs if and only
      if KVM may actually use the guest MTRRs to compute the "real" memtype.
      For all forms of paging, guest MTRRs are purely virtual in the sense that
      they are completely ignored by hardware, i.e. they affect the memtype
      only if software manually consumes them.  The only scenario where KVM
      consumes the guest MTRRs is when shadow_memtype_mask is non-zero and the
      guest has non-coherent DMA; in all other cases KVM simply leaves the PAT
      field in SPTEs as '0' to encode WB memtype.
      
      Note, KVM may still ultimately ignore guest MTRRs, e.g. if the backing
      pfn is host MMIO, but false positives are ok as they only cause a slight
      performance blip (unless the guest is doing weird things with its MTRRs,
      which is extremely unlikely).
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220715230016.3762909-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
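      The "if and only if" condition above boils down to a two-part test;
      roughly (helper name made up, shadow_memtype_mask is the mask introduced
      in the commit listed just below):

        /* Guest MTRRs can influence the SPTE memtype, and thus the maximum
         * mapping level, only when KVM honors guest memtype at all: the
         * vendor memtype mask is non-zero (EPT) and the guest has
         * non-coherent DMA devices attached. */
        static bool kvm_may_use_guest_mtrrs(struct kvm *kvm)
        {
                return shadow_memtype_mask && kvm_arch_has_noncoherent_dma(kvm);
        }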
    • KVM: x86/mmu: Add shadow mask for effective host MTRR memtype · 38bf9d7b
      Sean Christopherson authored
      Add shadow_memtype_mask to capture that EPT needs a non-zero memtype mask
      instead of relying on TDP being enabled, as NPT doesn't need a non-zero
      mask.  This is a glorified nop as kvm_x86_ops.get_mt_mask() returns zero
      for NPT anyway.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220715230016.3762909-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Drop unnecessary goto+label in kvm_arch_init() · 82ffad2d
      Sean Christopherson authored
      Return directly if kvm_arch_init() detects an error before doing any real
      work; jumping through a label obfuscates what's happening and carries the
      unnecessary risk of leaving 'r' uninitialized.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220715230016.3762909-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
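      A generic sketch of the pattern (hypothetical helpers), returning
      directly while there is nothing to unwind instead of routing every
      early failure through a label:

        static int example_arch_init(void)
        {
                if (!example_cpu_supported())
                        return -EOPNOTSUPP;     /* fail fast, nothing to undo */

                if (!example_alloc_caches())
                        return -ENOMEM;         /* still nothing to roll back */

                /* goto-based unwinding only pays off once a later step can
                 * fail after earlier steps have allocated resources. */
                return 0;
        }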
    • KVM: x86: Reject loading KVM if host.PAT[0] != WB · 94bda2f4
      Sean Christopherson authored
      Reject KVM if entry '0' in the host's IA32_PAT MSR is not programmed to
      writeback (WB) memtype.  KVM subtly relies on IA32_PAT entry '0' to be
      programmed to WB by leaving the PAT bits in shadow paging and NPT SPTEs
      as '0'.  If something other than WB is in PAT[0], at _best_ guests will
      suffer very poor performance, and at worst KVM will crash the system by
      breaking cache-coherency expectations (e.g. using WC for guest memory).
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220715230016.3762909-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
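      A sketch of such a check (error code and message are placeholders):
      PAT entry '0' is the low byte of the IA32_PAT MSR, and the
      architectural encoding for writeback is 0x06.

        u64 host_pat;

        rdmsrl(MSR_IA32_CR_PAT, host_pat);
        if ((host_pat & 0xff) != 0x06) {        /* 0x06 == WB memtype */
                pr_err("host PAT[0] is not WB, refusing to load KVM\n");
                return -EIO;
        }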
    • KVM: SVM: Fix x2APIC MSRs interception · 01e69cef
      Suravee Suthikulpanit authored
      The index for svm_direct_access_msrs was incorrectly initialized with
      the APIC MMIO register macros. Fix by introducing a macro for calculating
      x2APIC MSRs.
      
      Fixes: 5c127c85 ("KVM: SVM: Adding support for configuring x2APIC MSRs interception")
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Message-Id: <20220718083833.222117-1-suravee.suthikulpanit@amd.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
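      Architecturally, the 16-byte xAPIC MMIO register at offset 'off' maps
      to x2APIC MSR 0x800 + (off >> 4), so a macro along these lines (name
      hypothetical) yields the correct MSR index:

        #define X2APIC_MSR(off)         (0x800 + ((off) >> 4))

      For example, the APIC ID register at MMIO offset 0x20 becomes MSR
      0x802, and the TPR at offset 0x80 becomes MSR 0x808.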
    • KVM: x86/mmu: Remove underscores from __pte_list_remove() · 3c2e1037
      Sean Christopherson authored
      Remove the underscores from __pte_list_remove(); the function formerly
      known as pte_list_remove() is now named kvm_zap_one_rmap_spte() to show
      that it zaps rmaps/PTEs, i.e. doesn't just remove an entry from a list.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-8-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Rename pte_list_{destroy,remove}() to show they zap SPTEs · 9202aee8
      Sean Christopherson authored
      Rename pte_list_remove() and pte_list_destroy() to kvm_zap_one_rmap_spte()
      and kvm_zap_all_rmap_sptes() respectively to document (a) that they zap
      SPTEs and (b) how they differ (remove vs. destroy does
      not exactly scream "one vs. all").
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-7-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Rename rmap zap helpers to eliminate "unmap" wrapper · f8480721
      Sean Christopherson authored
      Rename kvm_unmap_rmap() and kvm_zap_rmap() to kvm_zap_rmap() and
      __kvm_zap_rmap() respectively to show that what was the "unmap" helper is
      just a wrapper for the "zap" helper, i.e. that they do the exact same
      thing; one just exists to deal with its caller passing in more params.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-6-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Rename __kvm_zap_rmaps() to align with other nomenclature · 2833eda0
      Sean Christopherson authored
      Rename __kvm_zap_rmaps() to kvm_rmap_zap_gfn_range() to avoid future
      confusion with a soon-to-be-introduced __kvm_zap_rmap().  Using a plural
      "rmaps" is somewhat ambiguous without additional context, as it's not
      obvious whether it's referring to multiple rmap lists, versus multiple
      rmap entries within a single list.
      
      Use kvm_rmap_zap_gfn_range() to align with the pattern established by
      kvm_rmap_zap_collapsible_sptes(), without losing the information that it
      zaps only rmap-based MMUs, i.e. don't rename it to __kvm_zap_gfn_range().
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Drop the "p is for pointer" from rmap helpers · aed02fe3
      Sean Christopherson authored
      Drop the trailing "p" from rmap helpers, i.e. rename functions to simply
      be kvm_<action>_rmap().  Declaring that a function takes a pointer is
      completely unnecessary and goes against kernel style.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Directly "destroy" PTE list when recycling rmaps · a42989e7
      Sean Christopherson authored
      Use pte_list_destroy() directly when recycling rmaps instead of bouncing
      through kvm_unmap_rmapp() and kvm_zap_rmapp().  Calling kvm_unmap_rmapp()
      is unnecessary and odd as it requires passing dummy parameters; passing
      NULL for @slot when __rmap_add() already has a valid slot is especially
      weird and confusing.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Return a u64 (the old SPTE) from mmu_spte_clear_track_bits() · 35d539c3
      Sean Christopherson authored
      Return a u64, not an int, from mmu_spte_clear_track_bits().  The return
      value is the old SPTE value, which is very much a 64-bit value.  The sole
      caller that consumes the return value, drop_spte(), already uses a u64.
      The only reason that truncating the SPTE value is not problematic is
      because drop_spte() only queries the shadow-present bit, which is in the
      lower 32 bits.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220715224226.3749507-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nSVM: Pull CS.Base from actual VMCB12 for soft int/ex re-injection · da0b93d6
      Maciej S. Szmigiero authored
      enter_svm_guest_mode() first calls nested_vmcb02_prepare_control() to copy
      control fields from VMCB12 to the current VMCB, then
      nested_vmcb02_prepare_save() to perform a similar copy of the save area.
      
      This means that nested_vmcb02_prepare_control() still runs with the
      previous save area values in the current VMCB so it shouldn't take the L2
      guest CS.Base from this area.
      
      Explicitly pull CS.Base from the actual VMCB12 instead in
      enter_svm_guest_mode().
      
      Granted, having a non-zero CS.Base is a very rare thing (and even
      impossible in 64-bit mode), and having it change between nested VMRUNs is
      probably even rarer, but if it happens it would create a really subtle bug,
      so it's better to fix it upfront.
      
      Fixes: 6ef88d6e ("KVM: SVM: Re-inject INT3/INTO instead of retrying the instruction")
      Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
      Message-Id: <4caa0f67589ae3c22c311ee0e6139496902f2edc.1658159083.git.maciej.szmigiero@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>