Commit 421f090c authored by Dexuan Cui, committed by Wei Liu

x86/hyperv: Suspend/resume the VP assist page for hibernation

Unlike the other CPUs, CPU0 is never offlined during hibernation, so in the
resume path, the "new" kernel's VP assist page is not suspended (i.e. not
disabled), and later when we jump to the "old" kernel, the page is not
properly re-enabled for CPU0 with the allocated page from the old kernel.

So far, the VP assist page is used by hv_apic_eoi_write(), and is also
used in the case of nested virtualization (running KVM atop Hyper-V).

For hv_apic_eoi_write(), when the page is not properly re-enabled,
hvp->apic_assist is always 0, so the HV_X64_MSR_EOI MSR is always written.
This is not ideal for performance, but Hyper-V can still handle it correctly
according to the Hyper-V spec; nevertheless, Linux must still register the
correct VP assist page with the hypervisor, to prevent Hyper-V from writing
to the stale page, which corrupts guest memory and consequently may have
caused the hangs and triple faults seen while the non-boot CPUs resume.

Fix the issue by calling hv_cpu_die()/hv_cpu_init() in the syscore ops.
Without the fix, hibernation can fail at a rate of 1/300 ~ 1/500.
With the fix, hibernation can pass a long-haul test of 2000 runs.

In the case of nested virtualization, disabling/re-enabling the assist
page upon hibernation may be unsafe if there are active L2 guests.
It looks like KVM should be enhanced to abort the hibernation request if
there is any active L2 guest.

Fixes: 05bd330a ("x86/hyperv: Suspend/resume the hypercall page for hibernation")
Cc: stable@vger.kernel.org
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Link: https://lore.kernel.org/r/1587437171-2472-1-git-send-email-decui@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
parent 2ddddd0b
@@ -73,7 +73,8 @@ static int hv_cpu_init(unsigned int cpu)
 	struct page *pg;
 
 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
-	pg = alloc_page(GFP_KERNEL);
+	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
+	pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL);
 	if (unlikely(!pg))
 		return -ENOMEM;
 	*input_arg = page_address(pg);
@@ -254,6 +255,7 @@ static int __init hv_pci_init(void)
 static int hv_suspend(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
+	int ret;
 
 	/*
 	 * Reset the hypercall page as it is going to be invalidated
@@ -270,12 +272,17 @@ static int hv_suspend(void)
 	hypercall_msr.enable = 0;
 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
 
-	return 0;
+	ret = hv_cpu_die(0);
+	return ret;
 }
 
 static void hv_resume(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
+	int ret;
+
+	ret = hv_cpu_init(0);
+	WARN_ON(ret);
 
 	/* Re-enable the hypercall page */
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@@ -288,6 +295,7 @@ static void hv_resume(void)
 	hv_hypercall_pg_saved = NULL;
 }
 
+/* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
 static struct syscore_ops hv_syscore_ops = {
 	.suspend	= hv_suspend,
 	.resume		= hv_resume,