Commit b2e601d1 authored by Andre Detsch, committed by Jeremy Kerr

powerpc/spufs: Fix possible scheduling of a context to multiple SPEs

We currently have a race when scheduling a context to an SPE: after we
have found a runnable context in spusched_tick, the same context may
have been scheduled by spu_activate().

This may result in a panic if we try to unschedule a context that has
been freed in the meantime.

This change exits spu_schedule() if the context has already been
scheduled, so we don't end up scheduling it twice.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
parent b65fe035
@@ -728,6 +728,7 @@ static void spu_schedule(struct spu *spu, struct spu_context *ctx)
 	/* not a candidate for interruptible because it's called either
 	   from the scheduler thread or from spu_deactivate */
 	mutex_lock(&ctx->state_mutex);
+	if (ctx->state == SPU_STATE_SAVED)
 	__spu_schedule(spu, ctx);
 	spu_release(ctx);
 }
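For reference, this is a sketch of how spu_schedule() reads with the guard applied, reconstructed from the hunk above; the surrounding lines come from the diff context only, so treat it as an illustration rather than the full file:

static void spu_schedule(struct spu *spu, struct spu_context *ctx)
{
	/* not a candidate for interruptible because it's called either
	   from the scheduler thread or from spu_deactivate */
	mutex_lock(&ctx->state_mutex);
	/*
	 * spu_activate() may have already scheduled this context after
	 * spusched_tick picked it as runnable; in that case its state is no
	 * longer SPU_STATE_SAVED, so skip the second __spu_schedule() call
	 * rather than binding the context to two SPEs.
	 */
	if (ctx->state == SPU_STATE_SAVED)
		__spu_schedule(spu, ctx);
	spu_release(ctx);
}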