Commit 5a48a622 authored by Wanpeng Li, committed by Radim Krčmář

x86/kvm: virt_xxx memory barriers instead of mandatory barriers

The virt_xxx memory barriers are implemented trivially on top of the
low-level __smp_xxx macros, and on a strong TSO memory model __smp_xxx
reduces to a compiler barrier, whereas the mandatory barriers
unconditionally emit memory barrier instructions.  This patch replaces
the rmb() calls in kvm_steal_clock() with virt_rmb().
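As a rough illustration of the difference (the sketch_ names below are
invented for this example; the real definitions live in asm/barrier.h and
asm-generic/barrier.h and vary by kernel version and config):

	/* rmb(): mandatory barrier, always emits a real fence instruction. */
	#define sketch_rmb()       asm volatile("lfence" ::: "memory")

	/* barrier(): compiler barrier only, no instruction is emitted. */
	#define sketch_barrier()   asm volatile("" ::: "memory")

	/* virt_rmb() -> __smp_rmb(): on TSO x86 a compiler barrier suffices. */
	#define sketch_virt_rmb()  sketch_barrier()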

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
parent bd8fab39
@@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
 	src = &per_cpu(steal_time, cpu);
 	do {
 		version = src->version;
-		rmb();
+		virt_rmb();
 		steal = src->steal;
-		rmb();
+		virt_rmb();
 	} while ((version & 1) || (version != src->version));
 
 	return steal;
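The loop above is a seqcount-style lockless read: the hypervisor bumps
src->version to an odd value before updating the steal counter and back to
an even value afterwards, so the guest retries whenever it sees an odd or
changed version. A minimal user-space sketch of that protocol, with invented
names (fake_steal_time, read_steal) and atomic_signal_fence() standing in
for the compiler-only cost of virt_rmb() on x86:

	#include <stdint.h>
	#include <stdatomic.h>

	struct fake_steal_time {
		uint32_t version;	/* odd while the writer is mid-update */
		uint64_t steal;
	};

	/* Compiler-only fence, mimicking what virt_rmb() costs on x86. */
	#define compiler_rmb()	atomic_signal_fence(memory_order_acquire)

	static uint64_t read_steal(volatile struct fake_steal_time *st)
	{
		uint32_t version;
		uint64_t steal;

		do {
			version = st->version;
			compiler_rmb();	/* read version before steal */
			steal = st->steal;
			compiler_rmb();	/* read steal before rechecking version */
		} while ((version & 1) || (version != st->version));

		return steal;
	}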