Commit c1edcc41 authored by Sean Christopherson

KVM: x86: Retry to-be-emulated insn in "slow" unprotect path iff sp is zapped

Resume the guest and thus skip emulation of a non-PTE-writing instruction
if and only if unprotecting the gfn actually zapped at least one shadow
page.  If the gfn is write-protected for some reason other than shadow
paging, attempting to unprotect the gfn will effectively fail, and thus
retrying the instruction is all but guaranteed to be pointless.  This bug
has existed for a long time, but was effectively fudged around by the
retry RIP+address anti-loop detection.
Reviewed-by: Yuan Yao <yuan.yao@intel.com>
Link: https://lore.kernel.org/r/20240831001538.336683-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
parent 2fb2b787
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8965,14 +8965,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
 	if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
 		return false;
 
-	vcpu->arch.last_retry_eip = ctxt->eip;
-	vcpu->arch.last_retry_addr = cr2_or_gpa;
-
 	if (!vcpu->arch.mmu->root_role.direct)
 		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
 
-	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
+	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
+		return false;
 
+	vcpu->arch.last_retry_eip = ctxt->eip;
+	vcpu->arch.last_retry_addr = cr2_or_gpa;
 	return true;
 }
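For readability, this is the tail of retry_instruction() as it reads after the patch, assembled from the hunk above (a sketch only; the earlier checks, local declarations, and the rest of the function are elided):

	/*
	 * Sketch of the post-patch flow: the last-retry bookkeeping is now
	 * updated only after kvm_mmu_unprotect_page() reports that at least
	 * one shadow page was zapped.  If nothing was zapped, the gfn is
	 * write-protected for some reason other than shadow paging, and
	 * retrying the instruction would be pointless, so fall back to
	 * emulation instead.
	 */
	if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
		return false;

	if (!vcpu->arch.mmu->root_role.direct)
		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);

	if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
		return false;

	vcpu->arch.last_retry_eip = ctxt->eip;
	vcpu->arch.last_retry_addr = cr2_or_gpa;
	return true;
}

Returning true tells the caller to resume the guest and retry the faulting instruction rather than emulate it; the RIP+address anti-loop check at the top still guards against retrying the same fault twice in a row.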