- 01 Apr, 2021 3 commits
-
Paolo Bonzini authored
-
Paolo Bonzini authored
Fixing nested_vmcb_check_save to avoid all TOC/TOU races is a bit harder in released kernels, so do the bare minimum and ensure that EFER.SVME is not cleared. Clearing the bit is problematic because svm_set_efer frees the data structures for nested virtualization when EFER.SVME is cleared. Also check that EFER.SVME remains set after a nested vmexit; it could be cleared if the bit is zero in the save area that is passed to KVM_SET_NESTED_STATE (the save area of the nested state corresponds to the nested hypervisor's state and is restored on the next nested vmexit).

Cc: stable@vger.kernel.org
Fixes: 2fcf4876 ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
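As a rough illustration of the kind of check described above (not the actual KVM code; the struct and helper names are made up for the example), the save-area validation boils down to refusing any state in which EFER.SVME is clear:

    #include <stdbool.h>
    #include <stdint.h>

    #define EFER_SVME (1ULL << 12)   /* EFER.SVME: SVM enable bit */

    /* Hypothetical, simplified view of a VMCB save area. */
    struct vmcb_save_area {
            uint64_t efer;
            uint64_t cr0, cr3, cr4;
    };

    /*
     * Reject any save state with EFER.SVME clear: the nested
     * virtualization data structures are freed when SVME is cleared,
     * so the bit must stay set while nested state exists.
     */
    static bool nested_save_area_is_valid(const struct vmcb_save_area *save)
    {
            return (save->efer & EFER_SVME) != 0;
    }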
-
Paolo Bonzini authored
Avoid races between check and use of the nested VMCB controls. This for example ensures that the VMRUN intercept is always reflected to the nested hypervisor, instead of being processed by the host. Without this patch, it is possible to end up with svm->nested.hsave pointing to the MSR permission bitmap for nested guests. This bug is CVE-2021-29657.

Reported-by: Felix Wilhelm <fwilhelm@google.com>
Cc: stable@vger.kernel.org
Fixes: 2fcf4876 ("KVM: nSVM: implement on demand allocation of the nested state")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
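The underlying pattern is the classic fix for a time-of-check/time-of-use race: snapshot the guest-writable control fields into host-owned memory once, then both validate and consume only the snapshot. A minimal sketch of that idea in generic C (not the actual KVM structures or helpers; the intercept bit below is a placeholder):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical subset of guest-writable VMCB control fields. */
    struct vmcb_ctrl {
            uint64_t intercepts;
            uint64_t msrpm_base_pa;
    };

    struct host_cache {
            struct vmcb_ctrl ctrl;   /* host-owned copy, out of guest reach */
    };

    static bool ctrl_is_valid(const struct vmcb_ctrl *ctrl)
    {
            /* e.g. require the VMRUN intercept; placeholder bit position. */
            return (ctrl->intercepts & (1ULL << 32)) != 0;
    }

    /*
     * Copy the guest-controlled fields exactly once, then check and use
     * only the copy.  Checking guest memory and re-reading it later would
     * let the guest change the values in between.
     */
    static bool enter_nested_guest(struct host_cache *cache,
                                   const struct vmcb_ctrl *guest_ctrl)
    {
            memcpy(&cache->ctrl, guest_ctrl, sizeof(cache->ctrl));
            if (!ctrl_is_valid(&cache->ctrl))
                    return false;
            /* ... from here on, only cache->ctrl is consumed ... */
            return true;
    }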
-
- 15 Mar, 2021 37 commits
-
Sean Christopherson authored
Set the PAE roots used as decrypted to play nice with SME when KVM is using shadow paging. Explicitly skip setting the C-bit when loading CR3 for PAE shadow paging, even though it's completely ignored by the CPU. The extra documentation is nice to have.

Note, there are several subtleties at play with NPT. In addition to legacy shadow paging, the PAE roots are used for SVM's NPT when either KVM is 32-bit (uses PAE paging) or KVM is 64-bit and shadowing 32-bit NPT. However, 32-bit Linux, and thus KVM, doesn't support SME. And 64-bit KVM can happily set the C-bit in CR3. This also means that keeping __sme_set(root) for 32-bit KVM when NPT is enabled is conceptually wrong, but functionally ok since SME is 64-bit only. Leave it as is to avoid unnecessary pollution.

Fixes: d0ec49d4 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Cc: stable@vger.kernel.org
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210309224207.1218275-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
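For context, "setting the C-bit" is just OR-ing the per-system SME encryption mask into a physical address, and "skipping" it means passing the root address through untouched. A tiny illustrative sketch; the mask value and helper names here are stand-ins, not the kernel's real definitions:

    #include <stdint.h>

    /* Stand-in for the SME encryption mask; the real mask is CPU-specific. */
    static uint64_t sme_me_mask = 1ULL << 47;

    /* Mark a physical address as encrypted by setting the C-bit. */
    static inline uint64_t sme_set(uint64_t paddr)
    {
            return paddr | sme_me_mask;
    }

    /* PAE shadow-paging root: loaded as-is, C-bit deliberately not set. */
    static uint64_t cr3_for_pae_shadow_root(uint64_t root_paddr)
    {
            return root_paddr;
    }

    /* Normal 64-bit root: C-bit set as usual. */
    static uint64_t cr3_for_encrypted_root(uint64_t root_paddr)
    {
            return sme_set(root_paddr);
    }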
-
Sean Christopherson authored
Use '0' as the one and only value denoting an invalid pae_root, instead of a mix of '0' and INVALID_PAGE. Unlike root_hpa, the pae_roots hold permission bits and thus are guaranteed to be non-zero. Having to deal with both values leads to bugs, e.g. failing to set the root back to INVALID_PAGE, warning on the wrong value, etc...

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210309224207.1218275-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Track the address of the top-level EPT struct, a.k.a. the root HPA, instead of the EPTP itself for Hyper-V's paravirt TLB flush. The paravirt API takes only the address, not the full EPTP, and in theory tracking the EPTP could lead to false negatives, e.g. if the HPA matched but the attributes in the EPTP do not. In practice, such a mismatch is extremely unlikely, if not flat out impossible, given how KVM generates the EPTP.

Opportunistically rename the related fields to use the 'root' nomenclature, and to prefix them with 'hv_' to connect them to Hyper-V's paravirt TLB flushing.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
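For reference, an EPTP packs the root HPA together with attribute bits (memory type, page-walk length, A/D enable) in its low bits, so tracking the raw root is just a matter of never carrying those attributes around. A hedged sketch of the difference; the helper names are made up, the bit layout follows the SDM:

    #include <stdint.h>

    #define PAGE_MASK_4K           (~0xFFFULL)

    /* Attribute bits that live in the low 12 bits of an EPTP. */
    #define EPTP_MEMTYPE_WB        0x6ULL          /* bits 2:0 */
    #define EPTP_PAGE_WALK_LEN_4   (3ULL << 3)     /* bits 5:3 = levels - 1 */
    #define EPTP_AD_ENABLE         (1ULL << 6)     /* bit 6 */

    /* Build an EPTP from a root HPA, roughly as a hypervisor would. */
    static uint64_t make_eptp(uint64_t root_hpa)
    {
            return root_hpa | EPTP_MEMTYPE_WB | EPTP_PAGE_WALK_LEN_4 |
                   EPTP_AD_ENABLE;
    }

    /*
     * The paravirt flush API wants only the root address, so comparing or
     * handing out full EPTPs risks mismatching purely on attribute bits;
     * tracking the root directly avoids that.
     */
    static uint64_t eptp_to_root_hpa(uint64_t eptp)
    {
            return eptp & PAGE_MASK_4K;
    }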
-
Sean Christopherson authored
Skip additional EPTP flushes if one fails when processing EPTPs for Hyper-V's paravirt TLB flushing. If _any_ flush fails, KVM falls back to a full global flush, i.e. additional flushes are unnecessary (and will likely fail anyways). Continue processing the loop unless a mismatch was already detected, e.g. to handle the case where the first flush fails and there is a yet-to-be-detected mismatch.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Ifdef away the Hyper-V specific fields in structs kvm_vmx and vcpu_vmx as each field has only a single reference outside of the struct itself that isn't already wrapped in ifdeffery (and both are initialization). vcpu_vmx.ept_pointer in particular should be wrapped as it is valid if and only if Hyper-V is active, i.e. non-Hyper-V code cannot rely on it to actually track the current EPTP (without additional code changes).

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Explicitly check that kvm_x86_ops.tlb_remote_flush() points at Hyper-V's implementation for PV flushing instead of assuming that a non-NULL implementation means running on Hyper-V. Wrap the related logic in ifdeffery as hv_remote_flush_tlb() is defined iff CONFIG_HYPERV!=n.

Short term, the explicit check makes it more obvious why a non-NULL tlb_remote_flush() triggers EPTP shenanigans. Long term, this will allow TDX to define its own implementation of tlb_remote_flush() without running afoul of Hyper-V.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-9-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
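The distinction is between "some remote-flush hook is installed" and "the Hyper-V hook specifically is installed", which is just a pointer-identity comparison on the ops table. A trivialized sketch of the pattern with generic names (not KVM's):

    #include <stdbool.h>
    #include <stddef.h>

    struct flush_ops {
            int (*tlb_remote_flush)(void *vm);
    };

    static int hv_remote_flush_tlb(void *vm) { return 0; }   /* stub */

    /* Fragile: any non-NULL hook (e.g. a future TDX one) would match. */
    static bool uses_hv_flush_weak(const struct flush_ops *ops)
    {
            return ops->tlb_remote_flush != NULL;
    }

    /* Robust: only the Hyper-V implementation itself matches. */
    static bool uses_hv_flush_strict(const struct flush_ops *ops)
    {
            return ops->tlb_remote_flush == hv_remote_flush_tlb;
    }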
-
Sean Christopherson authored
Don't invalidate the common EPTP, and thus trigger rechecking of EPTPs across all vCPUs, if the new EPTP matches the old/common EPTP. In all likelihood this is a meaningless optimization, but there are (uncommon) scenarios where KVM can reload the same EPTP.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop the dedicated 'ept_pointers_match' field in favor of stuffing 'hv_tlb_eptp' with INVALID_PAGE to mark it as invalid, i.e. to denote that there is at least one EPTP mismatch. Use a local variable to track whether or not a mismatch is detected so that hv_tlb_eptp can be used to skip redundant flushes.

No functional change intended.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Combine the for-loops for Hyper-V TLB EPTP checking and flushing, and in doing so skip flushes for vCPUs whose EPTP matches the target EPTP.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Fold check_ept_pointer_match() into hv_remote_flush_tlb_with_range() in preparation for combining the kvm_for_each_vcpu loops of the ==CHECK and !=MATCH statements.

No functional change intended.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Capture kvm_vmx in a local variable instead of polluting hv_remote_flush_tlb_with_range() with to_kvm_vmx(kvm).

No functional change intended.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Explicitly track the EPTP that is common to all vCPUs instead of grabbing vCPU0's EPTP when invoking Hyper-V's paravirt TLB flush. Tracking the EPTP will allow optimizing the checks when loading a new EPTP and will also allow dropping ept_pointer_match, e.g. by marking the common EPTP as invalid.

This also technically fixes a bug where KVM could theoretically flush an invalid GPA if all vCPUs have an invalid root. In practice, it's likely impossible to trigger a remote TLB flush in such a scenario. In any case, the superfluous flush is completely benign.

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Retrieve the active PCID only when writing a guest CR3 value, i.e. don't get the PCID when using EPT or NPT. The PCID is especially problematic for EPT as the bits have different meaning, and so the PCID must be manually stripped, which is annoying and unnecessary. And on VMX, getting the active PCID also involves reading the guest's CR3 and CR4.PCIDE, i.e. may add pointless VMREADs.

Opportunistically rename the pgd/pgd_level params to root_hpa and root_level to better reflect their new roles. Keep the function names, as "load the guest PGD" is still accurate/correct.

Last, and probably least, pass root_hpa as a hpa_t/u64 instead of an unsigned long. The EPTP holds a 64-bit value, even in 32-bit mode, so in theory EPT could support HIGHMEM for 32-bit KVM. Never mind that doing so would require changing the MMU page allocators and reworking the MMU to use kmap().

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210305183123.3978098-2-seanjc@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
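Architecturally, when CR4.PCIDE=1 the active PCID is simply the low 12 bits of CR3 (and 0 otherwise), which is why fetching it is pointless, and potentially costly, when the value being written isn't a guest CR3 at all. A small sketch of that lookup, for illustration only:

    #include <stdint.h>

    #define X86_CR4_PCIDE  (1ULL << 17)   /* CR4.PCIDE: PCIDs enabled */
    #define CR3_PCID_MASK  0xFFFULL       /* PCID lives in CR3 bits 11:0 */

    /*
     * The active PCID only exists when CR4.PCIDE is set; with PCIDs
     * disabled the low CR3 bits have other meanings (PCD/PWT) and the
     * effective PCID is 0.
     */
    static uint64_t active_pcid(uint64_t cr3, uint64_t cr4)
    {
            return (cr4 & X86_CR4_PCIDE) ? (cr3 & CR3_PCID_MASK) : 0;
    }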
-
Uros Bizjak authored
Avoid jump by moving exception fixups out of line.

Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Message-Id: <20210226125621.111723-1-ubizjak@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Debugging unexpected reserved bit page faults sucks. Dump the reserved bits that (likely) caused the page fault to make debugging suck a little less.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-25-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Use low "available" bits to tag REMOVED SPTEs. Using a high bit is moderately costly as it often causes the compiler to generate a 64-bit immediate. More importantly, this makes it very clear REMOVED_SPTE is a value, not a flag. Cc: Ben Gardon <bgardon@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210225204749.1512652-24-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Use the is_removed_spte() helper instead of open coding the check.

No functional change intended.

Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-23-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Tweak the MMU_WARN that guards against weirdness when querying A/D status to fire on a !MMU_PRESENT SPTE, as opposed to a MMIO SPTE. Attempting to query A/D status on any kind of !MMU_PRESENT SPTE, MMIO or otherwise, indicates a KVM bug. Case in point, several now-fixed bugs were identified by enabling this new WARN.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-22-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Introduce MMU_PRESENT to explicitly track which SPTEs are "present" from the MMU's perspective. Checking for shadow-present SPTEs is a very common operation for the MMU, particularly in hot paths such as page faults. With the addition of "removed" SPTEs for the TDP MMU, identifying shadow-present SPTEs is quite costly especially since it requires checking multiple 64-bit values.

On 64-bit KVM, this reduces the footprint of kvm.ko's .text by ~2k bytes. On 32-bit KVM, this increases the footprint by ~200 bytes, but only because gcc now inlines several more MMU helpers, e.g. drop_parent_pte().

We now need to drop bit 11, used for the MMU_PRESENT flag, from the set of bits used to store the generation number in MMIO SPTEs. Otherwise MMIO SPTEs with bit 11 set would get false positives for is_shadow_present_spte() and lead to a variety of fireworks, from oopses to likely hangs of the host kernel.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-21-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
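Conceptually the win is that "is this SPTE shadow-present?" collapses to one bit test instead of several comparisons against special values. A hedged sketch (bit 11 as the present marker, matching the description above; the "before" helper and its parameters are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define SPTE_MMU_PRESENT_MASK  (1ULL << 11)

    /* Before: rule out zero, MMIO, and "removed" special values. */
    static bool is_shadow_present_old(uint64_t spte, uint64_t mmio_mask,
                                      uint64_t mmio_value, uint64_t removed_spte)
    {
            return spte != 0 &&
                   (spte & mmio_mask) != mmio_value &&
                   spte != removed_spte;
    }

    /* After: a single AND against a dedicated present bit. */
    static bool is_shadow_present_new(uint64_t spte)
    {
            return (spte & SPTE_MMU_PRESENT_MASK) != 0;
    }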
-
Sean Christopherson authored
Use bits 57 and 58 for HOST_WRITABLE and MMU_WRITABLE when using EPT. This will allow using bit 11 as a constant MMU_PRESENT, which is desirable as checking for a shadow-present SPTE is one of the most common SPTE operations in KVM, particular in hot paths such as page faults.

EPT is short on low available bits; currently bit 11 is the only always-available bit. Bit 10 is also available, but only while KVM doesn't support mode-based execution. On the other hand, PAE paging doesn't have _any_ high available bits. Thus, using bit 11 is the only feasible option for MMU_PRESENT.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-20-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Make the location of the HOST_WRITABLE and MMU_WRITABLE configurable for a given KVM instance. This will allow EPT to use high available bits, which in turn will free up bit 11 for a constant MMU_PRESENT bit.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-19-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Let the MMU deal with the SPTE masks to avoid splitting the logic and knowledge across the MMU and VMX. The SPTE masks that are used for EPT are very, very tightly coupled to the MMU implementation. The use of available bits, the existence of A/D types, the fact that shadow_x_mask even exists, and so on and so forth are all baked into the MMU implementation. Cross referencing the params to the masks is also a nightmare, as pretty much every param is a u64.

A future patch will make the location of the MMU_WRITABLE and HOST_WRITABLE bits MMU specific, to free up bit 11 for a MMU_PRESENT bit. Doing that change with the current kvm_mmu_set_mask_ptes() would be an absolute mess.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-18-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Squish all the code for (re)setting the various SPTE masks into one location. With the split code, it's not at all clear that the masks are set once during module initialization. This will allow a future patch to clean up initialization of the masks without shuffling code all over tarnation.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-17-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Move kvm_mmu_set_mask_ptes() into mmu.c as prep for future cleanup of the mask initialization code.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-16-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Document that SHADOW_ACC_TRACK_SAVED_BITS_SHIFT is directly dependent on bits 53:52 being used to track the A/D type.

Remove PT64_SECOND_AVAIL_BITS_SHIFT as it is at best misleading, and at worst wrong. For PAE paging, which arguably is a variant of PT64, the bits are reserved. For MMIO SPTEs the bits are not available as they're used for the MMIO generation. For access tracked SPTEs, they are also not available as bits 56:54 are used to store the original RX bits.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-15-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Use bits 53 and 52 for the MMIO generation now that they're not used to identify MMIO SPTEs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
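MMIO SPTEs store the memslot generation split across whatever low and high bits happen to be free, so encoding and decoding are a pair of shift-and-mask operations. A generic sketch of that packing, using assumed bit positions purely for illustration (low bits 10:3 plus high bits 53:52 here; KVM's actual layout may differ):

    #include <stdint.h>

    #define GEN_LOW_SHIFT   3
    #define GEN_LOW_BITS    8                       /* SPTE bits 10:3  */
    #define GEN_HIGH_SHIFT  52
    #define GEN_HIGH_BITS   2                       /* SPTE bits 53:52 */

    #define GEN_LOW_MASK    (((1ULL << GEN_LOW_BITS) - 1) << GEN_LOW_SHIFT)
    #define GEN_HIGH_MASK   (((1ULL << GEN_HIGH_BITS) - 1) << GEN_HIGH_SHIFT)

    /* Scatter a generation number into the two free bit ranges. */
    static uint64_t gen_to_spte_bits(uint64_t gen)
    {
            uint64_t low  = (gen << GEN_LOW_SHIFT) & GEN_LOW_MASK;
            uint64_t high = ((gen >> GEN_LOW_BITS) << GEN_HIGH_SHIFT) & GEN_HIGH_MASK;

            return low | high;
    }

    /* Gather the generation back out of an MMIO SPTE. */
    static uint64_t spte_bits_to_gen(uint64_t spte)
    {
            uint64_t low  = (spte & GEN_LOW_MASK) >> GEN_LOW_SHIFT;
            uint64_t high = (spte & GEN_HIGH_MASK) >> GEN_HIGH_SHIFT;

            return low | (high << GEN_LOW_BITS);
    }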
-
Sean Christopherson authored
Rename the various A/D status defines to explicitly associate them with TDP. There is a subtle dependency on the bits in question never being set when using PAE paging, as those bits are reserved, not available. I.e. using these bits outside of TDP (technically EPT) would cause explosions.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-13-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add a module param to disable MMIO caching so that it's possible to test the related flows without access to the necessary hardware. Using shadow paging with 64-bit KVM and 52 bits of physical address space must disable MMIO caching as there are no reserved bits to be had.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
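As a sketch of what such a knob looks like, here is a read-only boolean module parameter in kernel style; the parameter and variable names are assumptions for illustration, not necessarily the ones the patch uses:

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /* Assumed name; lets testers force the no-MMIO-caching paths. */
    static bool enable_mmio_caching = true;
    module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
    MODULE_PARM_DESC(mmio_caching,
                     "Use spare SPTE bits to cache MMIO accesses");

    MODULE_LICENSE("GPL");

Booting or loading with the parameter set to 0 (via the module's actual parameter name on the kvm command line) would then exercise the uncached MMIO path even on hardware that has reserved bits to spare.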
-
Sean Christopherson authored
Stop tagging MMIO SPTEs with specific available bits and instead detect MMIO SPTEs by checking for their unique SPTE value. The value is guaranteed to be unique on shadow paging and NPT as setting reserved physical address bits on any other type of SPTE would constitute a KVM bug. Ditto for EPT, as creating a WX non-MMIO SPTE would also be a bug.

Note, this approach is also future-compatible with TDX, which will need to reflect MMIO EPT violations as #VEs into the guest. To create an EPT violation instead of a misconfig, TDX EPTs will need to have RWX=0. But, MMIO SPTEs will also be the only case where KVM clears SUPPRESS_VE, so MMIO SPTEs will still be guaranteed to have a unique value within a given MMU context.

The main motivation is to make it easier to reason about which types of SPTEs use which available bits. As a happy side effect, this frees up two more bits for storing the MMIO generation.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
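The detection itself reduces to a masked comparison against the single value reserved for MMIO, rather than testing dedicated tag bits. A minimal sketch; the mask/value constants are placeholders, not the kernel's real encodings:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Placeholder encoding: some combination of bits that no valid,
     * hardware-present SPTE can carry (e.g. reserved PA bits on shadow
     * paging/NPT, or an otherwise-illegal permission combo on EPT).
     */
    static uint64_t shadow_mmio_mask  = 0x7ULL << 52;
    static uint64_t shadow_mmio_value = 0x3ULL << 52;

    /* An SPTE is MMIO iff its masked bits match the reserved MMIO value. */
    static bool is_mmio_spte(uint64_t spte)
    {
            return (spte & shadow_mmio_mask) == shadow_mmio_value;
    }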
-
Sean Christopherson authored
The value returned by make_mmio_spte() is a SPTE, it is not a mask. Name it accordingly.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Remove TDP MMU's call to trace_kvm_mmu_set_spte() that is done for both shadow-present SPTEs and MMIO SPTEs. It's fully redundant for the former, and unnecessary for the latter. This aligns TDP MMU tracing behavior with that of the legacy MMU.

Fixes: 33dd3574 ("kvm: x86/mmu: Add existing trace points to TDP MMU")
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-9-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Now that it should be impossible to convert a valid SPTE to an MMIO SPTE, handle MMIO SPTEs early in mmu_set_spte() without going through set_spte() and all the logic for removing an existing, valid SPTE. The other caller of set_spte(), FNAME(sync_page)(), explicitly handles MMIO SPTEs prior to calling set_spte().

This simplifies mmu_set_spte() and set_spte(), and also "fixes" an oddity where MMIO SPTEs are traced by both trace_kvm_mmu_set_spte() and trace_mark_mmio_spte().

Note, mmu_spte_set() will WARN if this new approach causes KVM to create an MMIO SPTE overtop a valid SPTE.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
If MMIO caching is disabled, e.g. when using shadow paging on CPUs with 52 bits of PA space, go straight to MMIO emulation and don't install an MMIO SPTE. The SPTE will just generate a !PRESENT #PF, i.e. can't actually accelerate future MMIO.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Retry page faults (re-enter the guest) that hit an invalid memslot instead of treating the memslot as not existing, i.e. handling the page fault as an MMIO access. When deleting a memslot, SPTEs aren't zapped and the TLBs aren't flushed until after the memslot has been marked invalid. Handling the invalid slot as MMIO means there's a small window where a page fault could replace a valid SPTE with an MMIO SPTE.

The legacy MMU handles such a scenario cleanly, but the TDP MMU assumes such behavior is impossible (see the BUG() in __handle_changed_spte()). There's really no good reason why the legacy MMU should allow such a scenario, and closing this hole allows for additional cleanups.

Fixes: 2f2fad08 ("kvm: x86/mmu: Add functions to handle changed TDP SPTEs")
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Disable MMIO caching if the MMIO value collides with the L1TF mitigation that usurps high PFN bits. In practice this should never happen as only CPUs with SME support can generate such a collision (because the MMIO value can theoretically get adjusted into legal memory), and no CPUs exist that support SME and are susceptible to L1TF. But, closing the hole is trivial.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Bail from fast_page_fault() if the SPTE is not a shadow-present SPTE. Functionally, this is not strictly necessary as the !is_access_allowed() check will eventually reject the fast path, but an early check on shadow-present skips unnecessary checks and will allow a future patch to tweak the A/D status auditing to warn if KVM attempts to query A/D bits without first ensuring the SPTE is a shadow-present SPTE.

Note, is_shadow_present_pte() is quite expensive at this time, i.e. this might be a net negative in the short term. A future patch will optimize is_shadow_present_pte() to a single AND operation and remedy the issue.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
When updating accessed and dirty bits, check that the new SPTE is present before attempting to query its A/D bits. Failure to confirm the SPTE is present can theoretically cause a false negative, e.g. if an MMIO SPTE replaces a "real" SPTE and somehow the PFNs magically match.

Realistically, this is all but guaranteed to be a benign bug. Fix it up primarily so that a future patch can tweak the MMU_WARN_ON checking A/D status to fire if the SPTE is not-present.

Fixes: f8e14497 ("kvm: x86/mmu: Add access tracking for tdp_mmu")
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210225204749.1512652-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-