Commit 9932b49e authored by Sean Christopherson, committed by Paolo Bonzini

KVM: nVMX: Invoke ept_save_pdptrs() if and only if PAE paging is enabled

Invoke ept_save_pdptrs() when restoring L1's host state on a "late"
VM-Fail if and only if PAE paging is enabled.  This saves a CALL in the
common case where L1 is a 64-bit host, and avoids incorrectly marking
the PDPTRs as dirty.

WARN if ept_save_pdptrs() is called with PAE disabled now that the
nested usage pre-checks is_pae_paging().  Barring a bug in KVM's MMU,
attempting to read the PDPTRs with PAE disabled is now impossible.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200415203454.8296-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 4dcefa31
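
For reference (not part of this diff): the is_pae_paging() pre-check relied on above is a small helper in arch/x86/kvm/x86.h, roughly as sketched below — PAE paging is considered active only when the vCPU is paging with CR4.PAE set and is not in long mode.

	/* Approximate definition, shown for context; see arch/x86/kvm/x86.h. */
	static inline bool is_pae_paging(struct kvm_vcpu *vcpu)
	{
		return !is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu);
	}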
@@ -4250,7 +4250,7 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
 	 * VMFail, like everything else we just need to ensure our
 	 * software model is up-to-date.
 	 */
-	if (enable_ept)
+	if (enable_ept && is_pae_paging(vcpu))
 		ept_save_pdptrs(vcpu);
 
 	kvm_mmu_reset_context(vcpu);
@@ -2937,12 +2937,13 @@ void ept_save_pdptrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
 
-	if (is_pae_paging(vcpu)) {
-		mmu->pdptrs[0] = vmcs_read64(GUEST_PDPTR0);
-		mmu->pdptrs[1] = vmcs_read64(GUEST_PDPTR1);
-		mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2);
-		mmu->pdptrs[3] = vmcs_read64(GUEST_PDPTR3);
-	}
+	if (WARN_ON_ONCE(!is_pae_paging(vcpu)))
+		return;
+
+	mmu->pdptrs[0] = vmcs_read64(GUEST_PDPTR0);
+	mmu->pdptrs[1] = vmcs_read64(GUEST_PDPTR1);
+	mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2);
+	mmu->pdptrs[3] = vmcs_read64(GUEST_PDPTR3);
 
 	kvm_register_mark_dirty(vcpu, VCPU_EXREG_PDPTR);
 }
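
Context for the "incorrectly marking the PDPTRs as dirty" point in the commit message: the dirty bit on VCPU_EXREG_PDPTR is what later tells the VMX code to push the software-model PDPTRs back into the VMCS. The counterpart in vmx.c looks approximately like the sketch below (shown for reference only, not part of this diff).

	/* Approximate counterpart: invoked when guest CR3 is loaded with EPT
	 * enabled; writes the PDPTRs to the VMCS only if the register was
	 * marked dirty. */
	static void ept_load_pdptrs(struct kvm_vcpu *vcpu)
	{
		struct kvm_mmu *mmu = vcpu->arch.walk_mmu;

		if (!kvm_register_is_dirty(vcpu, VCPU_EXREG_PDPTR))
			return;

		if (is_pae_paging(vcpu)) {
			vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
			vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
			vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
			vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]);
		}
	}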