Commit cde9af6e authored by Andrew Jones, committed by Paolo Bonzini

KVM: add explicit barrier to kvm_vcpu_kick

kvm_vcpu_kick() must issue a general memory barrier prior to reading
vcpu->mode in order to ensure correctness of the mutual-exclusion
memory barrier pattern used with vcpu->requests.  While the cmpxchg
called from kvm_vcpu_kick():

 kvm_vcpu_kick
   kvm_arch_vcpu_should_kick
     kvm_vcpu_exiting_guest_mode
       cmpxchg

implies general memory barriers before and after the operation, that
implication is only valid when the cmpxchg succeeds.  We need an
explicit barrier for when it fails; otherwise, a VCPU thread on its
entry path that reads zero for vcpu->requests does not exclude the
possibility that the requesting thread sees !IN_GUEST_MODE when it
reads vcpu->mode.

kvm_make_all_cpus_request already had a barrier for this purpose, so
we remove it, as it is now redundant.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 6c6e8360
@@ -6853,7 +6853,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	/*
 	 * 1) We should set ->mode before checking ->requests.  Please see
-	 * the comment in kvm_make_all_cpus_request.
+	 * the comment in kvm_vcpu_exiting_guest_mode().
 	 *
 	 * 2) For APICv, we should set ->mode before checking PIR.ON. This
 	 * pairs with the memory barrier implicit in pi_test_and_set_on
@@ -270,6 +270,12 @@ struct kvm_vcpu {
 static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * The memory barrier ensures a previous write to vcpu->requests cannot
+	 * be reordered with the read of vcpu->mode.  It pairs with the general
+	 * memory barrier following the write of vcpu->mode in VCPU RUN.
+	 */
+	smp_mb__before_atomic();
 	return cmpxchg(&vcpu->mode, IN_GUEST_MODE, EXITING_GUEST_MODE);
 }
@@ -183,9 +183,6 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 		kvm_make_request(req, vcpu);
 		cpu = vcpu->cpu;
-		/* Set ->requests bit before we read ->mode. */
-		smp_mb__after_atomic();
-
 		if (!(req & KVM_REQUEST_NO_WAKEUP))
 			kvm_vcpu_wake_up(vcpu);