Commit 523379cd authored by Jintack Lim, committed by Greg Kroah-Hartman

KVM: arm/arm64: Let vcpu thread modify its own active state

commit 370a0ec1 upstream.

Currently, if a vcpu thread tries to change the active state of an
interrupt which is already on the same vcpu's AP list, it will loop
forever. Since the VGIC mmio handler is called after a vcpu has
already synced back the LR state to the struct vgic_irq, we can just
let it proceed safely.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 1f9175b9
@@ -187,21 +187,37 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
 static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
+	struct kvm_vcpu *requester_vcpu;
 	spin_lock(&irq->irq_lock);
+
+	/*
+	 * The vcpu parameter here can mean multiple things depending on how
+	 * this function is called; when handling a trap from the kernel it
+	 * depends on the GIC version, and these functions are also called as
+	 * part of save/restore from userspace.
+	 *
+	 * Therefore, we have to figure out the requester in a reliable way.
+	 *
+	 * When accessing VGIC state from user space, the requester_vcpu is
+	 * NULL, which is fine, because we guarantee that no VCPUs are running
+	 * when accessing VGIC state from user space so irq->vcpu->cpu is
+	 * always -1.
+	 */
+	requester_vcpu = kvm_arm_get_running_vcpu();
+
 	/*
 	 * If this virtual IRQ was written into a list register, we
 	 * have to make sure the CPU that runs the VCPU thread has
-	 * synced back LR state to the struct vgic_irq. We can only
-	 * know this for sure, when either this irq is not assigned to
-	 * anyone's AP list anymore, or the VCPU thread is not
-	 * running on any CPUs.
+	 * synced back the LR state to the struct vgic_irq.
 	 *
-	 * In the opposite case, we know the VCPU thread may be on its
-	 * way back from the guest and still has to sync back this
-	 * IRQ, so we release and re-acquire the spin_lock to let the
-	 * other thread sync back the IRQ.
+	 * As long as the conditions below are true, we know the VCPU thread
+	 * may be on its way back from the guest (we kicked the VCPU thread in
+	 * vgic_change_active_prepare) and still has to sync back this IRQ,
+	 * so we release and re-acquire the spin_lock to let the other thread
+	 * sync back the IRQ.
 	 */
 	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
+	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
 	       irq->vcpu->cpu != -1) /* VCPU thread is running */
 		cond_resched_lock(&irq->irq_lock);
...
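
To see why the old wait condition live-locked, here is a minimal userspace model of the two loop conditions in the hunk above. The struct definitions, function names, and main() harness are hypothetical stand-ins for the kernel types, reduced to the single field the loop inspects; only the boolean logic mirrors the actual code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel structures (illustration only). */
struct kvm_vcpu { int cpu; };                /* -1 when not running on any CPU */
struct vgic_irq { struct kvm_vcpu *vcpu; };  /* VCPU whose AP list holds the IRQ */

/* Old condition: wait until the owning VCPU thread goes off-CPU. A VCPU
 * poking its own active interrupt can never satisfy this while it runs. */
static bool must_wait_old(const struct vgic_irq *irq)
{
	return irq->vcpu && irq->vcpu->cpu != -1;
}

/* New condition: additionally let the owning VCPU itself through, since the
 * MMIO handler runs after that VCPU has already synced its LR state back. */
static bool must_wait_new(const struct vgic_irq *irq,
			  const struct kvm_vcpu *requester)
{
	return irq->vcpu && irq->vcpu != requester && irq->vcpu->cpu != -1;
}

int main(void)
{
	struct kvm_vcpu self = { .cpu = 0 };      /* requester is currently running */
	struct vgic_irq irq = { .vcpu = &self };  /* IRQ on the requester's own AP list */

	printf("old condition still waiting: %d\n", must_wait_old(&irq));        /* 1: spins forever */
	printf("new condition still waiting: %d\n", must_wait_new(&irq, &self)); /* 0: proceeds */
	return 0;
}

With the old condition the loop body only reschedules, so a VCPU thread whose own cpu field can never become -1 while it is running spins indefinitely; the requester check breaks exactly that case and no other.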