Commit 8f4dc2e7 authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86: Don't clear EFER during SMM transitions for 32-bit vCPU

Neither AMD nor Intel CPUs have an EFER field in the legacy SMRAM save
state area, i.e. don't save/restore EFER across SMM transitions.  KVM
somewhat models this, e.g. doesn't clear EFER on entry to SMM if the
guest doesn't support long mode.  But during RSM, KVM unconditionally
clears EFER so that it can get back to pure 32-bit mode in order to
start loading CRs with their actual non-SMM values.

Clear EFER only when it will be written when loading the non-SMM state
so as to preserve bits that can theoretically be set on 32-bit vCPUs,
e.g. KVM always emulates EFER_SCE.

And because CR4.PAE is cleared only to play nice with EFER, wrap that
code in the long mode check as well.  Note, this may result in a
compiler warning about cr4 being consumed uninitialized.  Re-read CR4
even though it's technically unnecessary, as doing so allows for more
readable code and RSM emulation is not a performance critical path.

Fixes: 660a5d51 ("KVM: x86: save/load state on SMM switch")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 9ec19493
@@ -2582,15 +2582,13 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 	 * CR0/CR3/CR4/EFER.  It's all a bit more complicated if the vCPU
 	 * supports long mode.
 	 */
-	cr4 = ctxt->ops->get_cr(ctxt, 4);
 	if (emulator_has_longmode(ctxt)) {
 		struct desc_struct cs_desc;
 
 		/* Zero CR4.PCIDE before CR0.PG. */
-		if (cr4 & X86_CR4_PCIDE) {
+		cr4 = ctxt->ops->get_cr(ctxt, 4);
+		if (cr4 & X86_CR4_PCIDE)
 			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
-			cr4 &= ~X86_CR4_PCIDE;
-		}
 
 		/* A 32-bit code segment is required to clear EFER.LMA. */
 		memset(&cs_desc, 0, sizeof(cs_desc));
@@ -2604,13 +2602,16 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
 	if (cr0 & X86_CR0_PE)
 		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
 
-	/* Now clear CR4.PAE (which must be done before clearing EFER.LME). */
-	if (cr4 & X86_CR4_PAE)
-		ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
+	if (emulator_has_longmode(ctxt)) {
+		/* Clear CR4.PAE before clearing EFER.LME. */
+		cr4 = ctxt->ops->get_cr(ctxt, 4);
+		if (cr4 & X86_CR4_PAE)
+			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
 
-	/* And finally go back to 32-bit mode. */
-	efer = 0;
-	ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
+		/* And finally go back to 32-bit mode. */
+		efer = 0;
+		ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
+	}
 
 	/*
 	 * Give pre_leave_smm() a chance to make ISA-specific changes to the
 ...