Commit 5b88cda6 authored by Suresh E. Warrier, committed by Alexander Graf

KVM: PPC: Book3S HV: Fix inaccuracies in ICP emulation for H_IPI

This fixes some inaccuracies in the state machine for the virtualized
ICP when implementing the H_IPI hcall (Set_MFRR and related states):

1. The old code wipes out any pending interrupts when the new MFRR is
   more favored than the CPPR but less favored than a pending
   interrupt (by always modifying xisr and the pending_pri). This can
   cause us to lose a pending external interrupt.

   The correct code here is to only modify the pending_pri and xisr in
   the ICP if the MFRR is equal to or more favored than the current
   pending pri (since in this case, it is guaranteed that there
   cannot be a pending external interrupt). The code changes are
   required in both kvmppc_rm_h_ipi and kvmppc_h_ipi.
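The corrected update can be sketched as a small standalone model of the ICP state. This is a hypothetical sketch for illustration, not the kernel's code: `struct icp_model` and `icp_set_mfrr` are invented names, though `XICS_IPI` and the 8-bit priority ordering (lower value = more favored) mirror the XICS code.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of the ICP fields involved in H_IPI;
 * not the kernel's struct kvmppc_icp_state. Lower value = more favored. */
struct icp_model {
	uint8_t  cppr;        /* current processor priority */
	uint8_t  pending_pri; /* priority of the pending interrupt */
	uint32_t xisr;        /* source number of the pending interrupt */
};

#define XICS_IPI 2 /* fixed IPI source number, as in the XICS code */

/* Corrected Set_MFRR update: only overwrite pending_pri/xisr when the
 * new MFRR is equal to or more favored than the current pending
 * priority, so a more-favored pending external interrupt is never
 * lost. Returns the XISR of a displaced external interrupt (which must
 * be rejected back to the source), or 0 if nothing was displaced. */
static uint32_t icp_set_mfrr(struct icp_model *icp, uint8_t mfrr)
{
	uint32_t reject = 0;

	if (mfrr < icp->cppr && mfrr <= icp->pending_pri) {
		if (icp->xisr != XICS_IPI)
			reject = icp->xisr; /* reject the displaced source */
		icp->pending_pri = mfrr;
		icp->xisr = XICS_IPI;
	}
	/* otherwise the currently pending interrupt stays untouched */
	return reject;
}
```

The buggy version updated pending_pri and xisr even when the new MFRR was less favored than the pending priority, silently dropping the pending external interrupt.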

2. Again, in both kvmppc_rm_h_ipi and kvmppc_h_ipi, there is a check
   for whether the MFRR is being made less favored AND, further,
   whether the new MFRR is also less favored than the current CPPR;
   only then do we check for any resends pending in the ICP. These
   checks look like they are designed to cover the case where, if the
   MFRR is being made less favored, we opportunistically trigger a
   resend of any interrupts that had been previously rejected.
   Although this is not a state described by PAPR, it is an action we
   actually need to take, especially if the CPPR is already at 0xFF:
   in that case the resend bit will stay on until another ICP state
   change, which may be a long time coming, and the interrupt stays
   pending until then. The current check that the MFRR also be less
   favored than the CPPR is broken when the CPPR is 0xFF, since it
   can never trigger in that case.

   Ideally, we would want to do a resend only if

   	prio(pending_interrupt) < mfrr && prio(pending_interrupt) < cppr

   where pending interrupt is the one that was rejected. But we don't
   have the priority of the pending interrupt state saved, so we
   simply trigger a resend whenever the MFRR is made less favored.
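The CPPR == 0xFF failure mode is easy to demonstrate. The two helpers below are illustrative stand-ins for the old and new resend conditions, not kernel functions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers contrasting the old and new resend triggers. */
static int resend_check_old(uint8_t mfrr, uint8_t old_mfrr, uint8_t cppr)
{
	/* old: also require the MFRR to be less favored than the CPPR */
	return mfrr > old_mfrr && mfrr > cppr;
}

static int resend_check_new(uint8_t mfrr, uint8_t old_mfrr)
{
	/* new: the MFRR being made less favored is enough */
	return mfrr > old_mfrr;
}
```

Since MFRR and CPPR are 8-bit priorities, `mfrr > 0xff` is unsatisfiable, so with the CPPR parked at 0xFF the old condition never fires and a previously rejected interrupt stays stuck until some other ICP state change.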

3. In kvmppc_rm_h_ipi, where we save state to pass resends to the
   virtual mode, we also need to save the ICP whose need_resend we
   reset, since this need not be the caller's own ICP (vcpu->arch.icp)
   as the current code incorrectly assumes. A new field rm_resend_icp
   is added to the kvmppc_icp structure for this purpose.
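The real-mode to virtual-mode handoff can be modeled as below. This is a hypothetical sketch: `struct icp_rm`, `rm_defer_resend` and `vm_resend_target` are invented names, and only the flag value is taken from the kvmppc_icp defines visible in the diff.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define XICS_RM_CHECK_RESEND 0x2 /* rm_action flag, as in kvmppc_icp */

/* Hypothetical, minimal model of the rm_action/rm_resend_icp handoff. */
struct icp_rm {
	uint32_t rm_action;
	struct icp_rm *rm_resend_icp; /* ICP whose need_resend was cleared */
};

/* Real mode: defer the resend, recording the *target* ICP, which need
 * not be the caller's own ICP when H_IPI is directed at another server. */
static void rm_defer_resend(struct icp_rm *this_icp, struct icp_rm *target)
{
	this_icp->rm_action |= XICS_RM_CHECK_RESEND;
	this_icp->rm_resend_icp = target;
}

/* Virtual mode: consume the deferred action against the saved ICP. */
static struct icp_rm *vm_resend_target(struct icp_rm *this_icp)
{
	if (this_icp->rm_action & XICS_RM_CHECK_RESEND)
		return this_icp->rm_resend_icp;
	return NULL;
}
```

Without the saved pointer, virtual mode would run the resend check against the caller's ICP, which is the wrong one whenever the H_IPI target was a different vcpu.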
Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
parent b4a83900
@@ -183,8 +183,10 @@ static void icp_rm_down_cppr(struct kvmppc_xics *xics, struct kvmppc_icp *icp,
 	 * state update in HW (ie bus transactions) so we can handle them
 	 * separately here as well.
 	 */
-	if (resend)
+	if (resend) {
 		icp->rm_action |= XICS_RM_CHECK_RESEND;
+		icp->rm_resend_icp = icp;
+	}
 }
@@ -254,10 +256,25 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
 	 * nothing needs to be done as there can be no XISR to
 	 * reject.
 	 *
+	 * ICP state: Check_IPI
+	 *
 	 * If the CPPR is less favored, then we might be replacing
-	 * an interrupt, and thus need to possibly reject it as in
+	 * an interrupt, and thus need to possibly reject it.
 	 *
-	 * ICP state: Check_IPI
+	 * ICP State: IPI
+	 *
+	 * Besides rejecting any pending interrupts, we also
+	 * update XISR and pending_pri to mark IPI as pending.
+	 *
+	 * PAPR does not describe this state, but if the MFRR is being
+	 * made less favored than its earlier value, there might be
+	 * a previously-rejected interrupt needing to be resent.
+	 * Ideally, we would want to resend only if
+	 *	prio(pending_interrupt) < mfrr &&
+	 *	prio(pending_interrupt) < cppr
+	 * where pending interrupt is the one that was rejected. But
+	 * we don't have that state, so we simply trigger a resend
+	 * whenever the MFRR is made less favored.
 	 */
 	do {
 		old_state = new_state = ACCESS_ONCE(icp->state);
@@ -270,13 +287,14 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
 		resend = false;
 		if (mfrr < new_state.cppr) {
 			/* Reject a pending interrupt if not an IPI */
-			if (mfrr <= new_state.pending_pri)
+			if (mfrr <= new_state.pending_pri) {
 				reject = new_state.xisr;
-			new_state.pending_pri = mfrr;
-			new_state.xisr = XICS_IPI;
+				new_state.pending_pri = mfrr;
+				new_state.xisr = XICS_IPI;
+			}
 		}

-		if (mfrr > old_state.mfrr && mfrr > new_state.cppr) {
+		if (mfrr > old_state.mfrr) {
 			resend = new_state.need_resend;
 			new_state.need_resend = 0;
 		}
@@ -289,8 +307,10 @@ int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
 	}

 	/* Pass resends to virtual mode */
-	if (resend)
+	if (resend) {
 		this_icp->rm_action |= XICS_RM_CHECK_RESEND;
+		this_icp->rm_resend_icp = icp;
+	}

 	return check_too_hard(xics, this_icp);
 }
@@ -613,10 +613,25 @@ static noinline int kvmppc_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
 	 * there might be a previously-rejected interrupt needing
 	 * to be resent.
 	 *
+	 * ICP state: Check_IPI
+	 *
 	 * If the CPPR is less favored, then we might be replacing
-	 * an interrupt, and thus need to possibly reject it as in
+	 * an interrupt, and thus need to possibly reject it.
 	 *
-	 * ICP state: Check_IPI
+	 * ICP State: IPI
+	 *
+	 * Besides rejecting any pending interrupts, we also
+	 * update XISR and pending_pri to mark IPI as pending.
+	 *
+	 * PAPR does not describe this state, but if the MFRR is being
+	 * made less favored than its earlier value, there might be
+	 * a previously-rejected interrupt needing to be resent.
+	 * Ideally, we would want to resend only if
+	 *	prio(pending_interrupt) < mfrr &&
+	 *	prio(pending_interrupt) < cppr
+	 * where pending interrupt is the one that was rejected. But
+	 * we don't have that state, so we simply trigger a resend
+	 * whenever the MFRR is made less favored.
 	 */
 	do {
 		old_state = new_state = ACCESS_ONCE(icp->state);
@@ -629,13 +644,14 @@ static noinline int kvmppc_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
 		resend = false;
 		if (mfrr < new_state.cppr) {
 			/* Reject a pending interrupt if not an IPI */
-			if (mfrr <= new_state.pending_pri)
+			if (mfrr <= new_state.pending_pri) {
 				reject = new_state.xisr;
-			new_state.pending_pri = mfrr;
-			new_state.xisr = XICS_IPI;
+				new_state.pending_pri = mfrr;
+				new_state.xisr = XICS_IPI;
+			}
 		}

-		if (mfrr > old_state.mfrr && mfrr > new_state.cppr) {
+		if (mfrr > old_state.mfrr) {
 			resend = new_state.need_resend;
 			new_state.need_resend = 0;
 		}
@@ -789,7 +805,7 @@ static noinline int kvmppc_xics_rm_complete(struct kvm_vcpu *vcpu, u32 hcall)
 	if (icp->rm_action & XICS_RM_KICK_VCPU)
 		kvmppc_fast_vcpu_kick(icp->rm_kick_target);
 	if (icp->rm_action & XICS_RM_CHECK_RESEND)
-		icp_check_resend(xics, icp);
+		icp_check_resend(xics, icp->rm_resend_icp);
 	if (icp->rm_action & XICS_RM_REJECT)
 		icp_deliver_irq(xics, icp, icp->rm_reject);
 	if (icp->rm_action & XICS_RM_NOTIFY_EOI)
@@ -74,6 +74,7 @@ struct kvmppc_icp {
 #define XICS_RM_NOTIFY_EOI	0x8
 	u32 rm_action;
 	struct kvm_vcpu *rm_kick_target;
+	struct kvmppc_icp *rm_resend_icp;
 	u32 rm_reject;
 	u32 rm_eoied_irq;